
Journal

The Capco Institute Journal of Financial Transformation

Applied Finance

Recipient of the Apex Awards for Publication Excellence 2002-2011

#32

08.2011


Journal
Editor
Shahin Shojai, Global Head of Strategic Research, Capco

Advisory Editors
Cornel Bender, Partner, Capco
Christopher Hamilton, Partner, Capco
Nick Jackson, Partner, Capco

Editorial Board
Franklin Allen, Nippon Life Professor of Finance, The Wharton School, University of Pennsylvania
Joe Anastasio, Partner, Capco
Philippe d'Arvisenet, Group Chief Economist, BNP Paribas
Rudi Bogni, former Chief Executive Officer, UBS Private Banking
Bruno Bonati, Strategic Consultant, Bruno Bonati Consulting
David Clark, NED on the board of financial institutions and a former senior advisor to the FSA
Géry Daeninck, former CEO, Robeco
Stephen C. Daffron, Global Head, Operations, Institutional Trading & Investment Banking, Morgan Stanley
Douglas W. Diamond, Merton H. Miller Distinguished Service Professor of Finance, Graduate School of Business, University of Chicago
Elroy Dimson, BGI Professor of Investment Management, London Business School
Nicholas Economides, Professor of Economics, Leonard N. Stern School of Business, New York University
Michael Enthoven, former Chief Executive Officer, NIBC Bank N.V.
José Luis Escrivá, Group Chief Economist, Grupo BBVA
George Feiger, Executive Vice President and Head of Wealth Management, Zions Bancorporation
Gregorio de Felice, Group Chief Economist, Banca Intesa
Hans Geiger, Professor of Banking, Swiss Banking Institute, University of Zurich
Peter Gomber, Full Professor, Chair of e-Finance, Goethe University Frankfurt
Wilfried Hauck, Chief Executive Officer, Allianz Dresdner Asset Management International GmbH
Michael D. Hayford, Corporate Executive Vice President, Chief Financial Officer, FIS
Pierre Hillion, de Picciotto Chaired Professor of Alternative Investments and Shell Professor of Finance, INSEAD
Thomas Kloet, Chief Executive Officer, TMX Group Inc.
Mitchel Lenson, former Group Head of IT and Operations, Deutsche Bank Group
Donald A. Marchand, Professor of Strategy and Information Management, IMD, and Chairman and President of enterpriseIQ
Colin Mayer, Peter Moores Dean, Saïd Business School, Oxford University
John Owen, Chief Operating Officer, Matrix Group
Steve Perry, Executive Vice President, Visa Europe
Derek Sach, Managing Director, Specialized Lending Services, The Royal Bank of Scotland
ManMohan S. Sodhi, Professor in Operations & Supply Chain Management, Cass Business School, City University London
John Taysom, Founder & Joint CEO, The Reuters Greenhouse Fund
Graham Vickery, Head of Information Economy Unit, OECD
Norbert Walter, Managing Director, Walter & Daughters Consult

Applied Finance
Part 1
9 Measuring Financial Supervision Architectures and the Role of Central Banks Donato Masciandaro, Marc Quintyn
15 What Reforms for the Credit Rating Industry? A European Perspective Karel Lannoo
23 Money Market Funds and Financial Stability: Comparing Sweden to the U.S. and Iceland Gudrun Gunnarsdottir, Maria Strömqvist
35 Interest Rates After the Credit Crunch: Markets and Models Evolution Marco Bianchetti, Mattia Carlicchi
49 Fat Tails and (Un)happy Endings: Correlation Bias, Systemic Risk and Prudential Regulation Jorge A. Chan-Lau
59 Empirical Analysis, Trading Strategies, and Risk Models for Defaulted Debt Securities Michael Jacobs, Jr.
75 Systemic Risk, an Empirical Approach Gonzalo de Cadenas-Santiago, Lara de Mesa, Alicia Sanchís
89 Price of Risk - Recent Evidence from Large Financials Manmohan Singh, Karim Youssef

Part 2
99 Simulation and Performance Evaluation of Liability Driven Investment (LDI) Katharina Schwaiger, Gautam Mitra

107 Behavioral Finance and Technical Analysis Kosrow Dehnad
113 The Failure of Financial Econometrics: Assessing the Cointegration Revolution Imad Moosa
123 A General Structural Approach For Credit Modeling Under Stochastic Volatility Marcos Escobar, Tim Friederich, Luis Seco, Rudi Zagst
133 A Stochastic Programming Model to Minimize Volume Liquidity Risk in Commodity Trading Emmanuel Fragnière, Iliya Markov
143 The Organization of Lending and the Use of Credit Scoring Techniques in Italian Banks Giorgio Albareto, Michele Benvenuti, Sauro Mocetti, Marcello Pagnini, Paola Rossi
159 Measuring the Economic Gains of Mergers and Acquisitions: Is it Time for a Change? Antonios Antoniou, Philippe Arbour, Huainan Zhao
169 Mobile Payments Go Viral: M-PESA in Kenya Ignacio Mas, Dan Radcliffe

Dear Reader,
Over the last ten years we have been proud to share with you the finance perspectives of both academics and practitioners, many of which have been provocative and heavily debated in light of the stress scenario we experienced in the 2008 financial crisis. Even now the markets remain challenged by the weight of sovereign debt and the potential for default.

Applied finance is more than just impressive models. In recent years many had started to believe that finance was relatively simple, even though the underpinning methodologies and mathematical models had become increasingly more complicated. However, the recent crisis proved that finance is significantly more complex than the models that support it.

Many of our Journals have focused on the complex models and finance theories that have become the tools of our trade. More recently we have been highlighting the operational and technology complexities that make these theories and models more vulnerable. These elements simply compound the complexity of the systemic risk within our world economies. The simple click of a button, to trade a single share, sets in motion an extremely complicated process in which there are still far too many human interventions. As the complexity of the instruments increases, so do the levels of risk and the complexity of the processes that follow the initiation of the trade.

Consequently, for us at Capco, the complexities of finance are as much about the processes that are involved in managing the industry and its many different participants as about simply pricing and managing complex instruments. That is why, in this issue of the Journal and the next, the focus is on the complexity in finance from a more practical, and more importantly, managerial perspective.

As always we welcome your candor and healthy debate. By so doing we continue to strive with you to form the future of finance.

Rob Heyvaert, Founder and CEO, Capco

Finance Applied
The finance discipline is one of those rare branches of economics that is deemed to be practical by its very nature. It deals with how corporations use banks or financial markets to raise capital for investments that lead to economic development, how individuals or their representatives participate in this economic process by investing in shares and bonds, and how companies can put these funds to the best possible use. The finance discipline not only advises recipients of capital on how to best apply it. It also provides guidelines on how the markets should be structured to best serve and protect borrowers and investors, and how these investors should make investment decisions.

For over 40 years it was accepted wisdom that the finance discipline was very effective in ensuring that markets operated in ways that best met the needs of the participants. Of course, with the introduction of more quantitative tools, the subject became even more applied. This led to the development of more complex financing tools and investment vehicles.

The recent crisis has put the discipline under a very critical spotlight. Many have started to question whether 40 years of academic research have really helped create markets that are indeed efficient. They also question the validity of the tools developed to price complex instruments and risk itself, the foundation of the entire discipline.

We at the Journal of Financial Transformation firmly believe that there is still quite a large gap between the research undertaken at leading finance academies, some of which has permeated into the real world, and what investors and borrowers need and apply. We believe that future research in finance should focus on what actually happens in the markets rather than what should. That is why this issue of the Journal is dedicated to applied finance. The articles that have passed our rigorous review process not only make a genuine contribution to the field, they are also practical in their focus.

We hope that you enjoy the practical nature of the papers in this edition of the Journal and that you will continue to support us by submitting your best ideas to us.

On behalf of the board of editors

Part 1
Measuring Financial Supervision Architectures and the Role of Central Banks
What Reforms for the Credit Rating Industry? A European Perspective
Money Market Funds and Financial Stability: Comparing Sweden to the U.S. and Iceland
Interest Rates After the Credit Crunch: Markets and Models Evolution
Fat Tails and (Un)happy Endings: Correlation Bias, Systemic Risk and Prudential Regulation
Empirical Analysis, Trading Strategies, and Risk Models for Defaulted Debt Securities
Systemic Risk, an Empirical Approach
Price of Risk - Recent Evidence from Large Financials

PART 1

Measuring Financial Supervision Architectures and the Role of Central Banks


Donato Masciandaro, Professor of Economics, Chair in Economics of Financial Regulation, and Director, Paolo Baffi Centre on Central Banking and Financial Regulation, Bocconi University

Marc Quintyn, Division Chief, IMF Institute, International Monetary Fund

Abstract
Today, policymakers in all countries, shocked by the financial crisis of 2007-2008, are carefully reconsidering the features of their supervisory regimes. This paper reviews the changing face of financial supervisory regimes before and after the crisis, introducing new indicators to measure the level of consolidation in supervision and the degree of involvement of the central banks.

Until around 15 years ago, the issue of the shape of the financial supervisory architecture was considered irrelevant. The fact that only banking systems were subject to robust and systematic supervision kept several of the current issues in the sphere of irrelevance. Since then, financial market development, resulting in the growing importance of the insurance, securities, and pension fund sectors, has made supervision of a growing number of non-bank financial intermediaries, as well as the investor protection dimension of supervision, highly relevant.

In June 1998, most of the responsibility for banking supervision in the U.K. was transferred from the Bank of England to the newly established Financial Services Authority (FSA), which was charged with supervising all segments of the financial system. For the first time, a large industrialized country, and one of the main international financial centers, had decided to assign the main task of supervising the entire financial system to a single authority other than the central bank (the U.K. regime was labeled the tripartite system, stressing the need for coordination in pursuing financial stability between the FSA, the Bank of England, and the Treasury). The Scandinavian countries, Norway (1986), Iceland and Denmark (1988), and Sweden (1991), had preceded the U.K. in the aftermath of domestic financial crises. But after that symbolic date of 1998, the number of unified supervisory agencies started to grow rapidly.

Europe has been the center of gravity of the trend towards supervisory consolidation (or unification). In addition to the U.K., three old European Union member states, Austria (2002), Belgium (2004), and Germany (2002), also placed financial supervision under a non-central bank single authority. In Ireland (2003) and the Czech and Slovak Republics (2006), the supervisory responsibilities were concentrated in the hands of the central bank. Five countries that are part of the E.U. enlargement process, Estonia (1999), Latvia (1998), Malta (2002), Hungary (2000), and Poland (2006), also concentrated all supervisory powers in the hands of a single authority. Outside Europe, unified agencies have been established in, among others, Colombia, Kazakhstan, Korea, Japan, Nicaragua, and Rwanda.

Then came the crisis. Most accounts of the contributing factors point to macroeconomic failures and imbalances as well as regulatory failures [see, for instance, Allen and Carletti (2009), Brunnermeier et al. (2009), Buiter (2008)]. While some of these studies also mention, in passing, certain supervisory failures, other studies analyze in more depth the contribution of supervisory failures to the crisis [for a survey see Masciandaro et al. (2011)]. Some of them explicitly mention flaws in the supervisory architecture as a contributing factor in some countries [Buiter (2008) for the U.S. and the U.K., and Leijonhufvud (2009) for the U.S.]. Thus, in response to the 2007-2008 financial crisis, different countries and regions (the European Union, Belgium, Germany, Ireland, the U.K., and the U.S., among others) are either implementing or evaluating the possibility of introducing reforms aimed at reshaping the supervisory architecture and the role of the central bank. In addition, Finland also established a unified supervisor in 2009.

In July 2010, U.S. President Barack Obama signed into law the Dodd-Frank Act, which is considered the most important U.S. financial regulation overhaul since the Great Depression. A rethink of the roles and responsibilities of the Fed has been part of this broad restyling of financial legislation. During the discussions of the bill, U.S. lawmakers debated the possibility of restricting some of the Fed's regulatory responsibilities (supervision of small banks, emergency lending powers), as well as increasing political control over the central bank through changes in its governance (congressional audits of monetary policy decisions, presidential nomination of the New York Fed President); nevertheless, the Dodd-Frank law ended up increasing the powers of the Fed as a banking supervisor.

In Europe, policymakers are moving to finalize reforms concerning the extent of central banks' involvement in supervision, both at the international and the national levels. In 2009, the European Commission enacted a proposal for the establishment of a European Systemic Risk Council (ESRC) for macro-prudential supervision, which should be dominated by the ECB. With regard to individual E.U. members, in 2008 the German grand-coalition government expressed its willingness to dismantle the single financial supervisor (BaFin) in favor of the Bundesbank, and Belgium expressed an interest in following suit. In June 2010, the U.K. government unveiled a reform of the bank supervisory system aimed at consolidating powers within the Bank of England: the key functions of the Financial Services Authority would be moved inside the Bank of England, which will become the Prudential Regulatory Authority. Finally, in the summer of 2010 the Irish Financial Services Regulatory Authority was legally merged with the central bank.

These episodes show that the financial supervisory architecture remains in a state of flux, and the latest round seems to provide signals of a sort of Great Reversal, given that it has been shown that before the crisis the direction of the changes in the supervisory structure was characterized by central bank specialization in pursuing monetary policy as a unique mandate [Masciandaro and Quintyn (2009), Orphanides (2010)]. This paper reviews the changing face of the financial supervisory regimes by introducing new indicators to measure the level of consolidation in supervision and the degree of the involvement of the central banks.

We define supervision as the activity that implements and enforces regulation. While regulation refers to the rules that govern the conduct of intermediaries, supervision is the monitoring practice that one or more public authorities undertake in order to ensure compliance with the regulatory framework [Barth et al. (2006)]. However, we use the terms regulatory and supervisory authorities interchangeably, as is done in most of the literature. In general, the focus of this paper is on micro-prudential supervision and consumer protection, since macro-prudential supervision is usually carried out by the central bank and competition policy is in the hands of a specialized authority [Borio (2003), Kremers et al. (2003), Čihák and Podpiera (2007), Herring and Carmassi (2008)]. Micro-prudential supervision is the general activity of safeguarding financial soundness at the level of individual financial firms, while macro-supervision is focused on monitoring the threats to financial stability that can arise from macroeconomic developments and from developments within the financial system as a whole [Commission of the European Communities (2009)]. In particular, we shed light on the reforms of the supervisory architecture. We classify as reforms those institutional changes implemented in a country which involved the establishment of a new supervisory authority and/or a change in the powers of at least one of the already existing agencies.


Cross country comparisons of financial regulation architectures: the existing indicators


The literature on the economics of financial supervision architectures has zoomed in on the following phenomenon: before the crisis, an increasing number of countries had moved towards a certain degree of consolidation of powers, which in several cases resulted in the establishment of unified regulators, distinct from the national central banks. Various studies [Barth et al. (2002), Arnone and Gambini (2007), Čihák and Podpiera (2007)] claim that the key issues for supervision are (i) whether there should be one or multiple supervisory authorities, and (ii) whether and how the central bank should be involved in supervision. More importantly, these two crucial features of a supervisory regime seem to be related. The literature has tried to undertake a thorough analysis of the supervisory reforms by measuring these key institutional variables [Masciandaro (2004, 2006, 2007, and 2008), Masciandaro and Quintyn (2009)], i.e., the degree of consolidation of the actual supervisory regimes, as well as the central bank involvement in supervision itself.

How can the degree of consolidation of financial regulation be measured? This is where the financial supervision unification index (FSU Index) becomes useful [Masciandaro (2004, 2006, 2007, and 2008)]. This index was created through an analysis of which, and how many, authorities in each of the examined countries are empowered to supervise the three traditional sectors of financial activity: banking, securities markets, and insurance. To transform the qualitative information into quantitative indicators, a numerical value has been assigned to each regime, to highlight the number of the agencies involved. The rationale by which the values are assigned simply considers the concept of unification (consolidation) of supervisory powers: the greater the unification, the higher the index value. The index was built on the following scale:

- 7 = single authority for all three sectors (total number of supervisors = 1);
- 5 = single authority for the banking sector and securities markets (total number of supervisors = 2);
- 3 = single authority for the insurance sector and the securities markets, or for the insurance sector and the banking sector (total number of supervisors = 2);
- 1 = specialized authority for each sector (total number of supervisors = 3).

A value of 5 was assigned to the single supervisor for the banking sector and securities markets because of the predominant importance of banking intermediation and securities markets over insurance in every national financial industry. It is also interesting to note that, in the group of countries with integrated supervisory agencies, there seems to be a higher degree of integration between banking and securities supervision than between banking and insurance supervision. Consequently, the degree of concentration of powers, ceteris paribus, is greater.

This approach does not, however, take into account another qualitative characteristic, namely that there are countries in which one sector is supervised by more than one authority. It is likely that the degree of concentration rises when there are two authorities in a given sector, one of which also has powers in a second sector. On the other hand, the degree of concentration falls when there are two authorities in a given sector, neither of which has powers in a second sector. It would, therefore, seem advisable to include these aspects in evaluating the various national supervisory structures by modifying the index in the following manner: adding 1 if there is at least one sector in the country with two authorities, one of which is also an authority for at least one other sector; subtracting 1 if there is at least one sector in the country with two authorities assigned to its supervision, but neither of these authorities has responsibility for another sector; and 0 elsewhere.

Now we can consider what role the central bank plays in the various national supervisory regimes. Here, an index of the central bank's involvement in financial supervision has been proposed [Masciandaro (2004, 2006, 2007, and 2008), Masciandaro and Quintyn (2009)]: the Central Bank as Financial Authority index (CBFA). For each country, and given the three traditional financial sectors (banking, securities, and insurance), the CBFA index is equal to: 1 if the central bank is not assigned the main responsibility for banking supervision; 2 if the central bank has the main (or sole) responsibility for banking supervision; 3 if the central bank has responsibility in any two sectors; and 4 if the central bank has responsibility in all three sectors.

In evaluating the role of the central bank in banking supervision, we considered the fact that, whatever the supervisory regime, the monetary authority has a responsibility in pursuing macro-financial stability. Consequently, we chose the relative role of the central bank as a rule of thumb: we assign a greater value (2 instead of 1) if the central bank is the sole or the main authority responsible for banking supervision.
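As an illustration, the short sketch below encodes the FSU scale (with the +1/-1 adjustment) and the CBFA scale for a stylized description of a country's architecture. It is a minimal reading of the rules given above, not the authors' own procedure, and the example country at the end is hypothetical.

```python
# Minimal sketch (not the authors' code): assigning the FSU and CBFA indices
# from a stylized description of a country's supervisory architecture.
# A country is a mapping sector -> list of authorities; the first authority
# listed for a sector is treated as its main supervisor.

SECTORS = ("banking", "securities", "insurance")

def fsu_index(country):
    main = {s: country[s][0] for s in SECTORS}
    if main["banking"] == main["securities"] == main["insurance"]:
        base = 7      # single authority for all three sectors
    elif main["banking"] == main["securities"]:
        base = 5      # one authority for banking and securities
    elif main["insurance"] in (main["banking"], main["securities"]):
        base = 3      # insurance pooled with banking or with securities
    else:
        base = 1      # a specialized authority for each sector

    adjustment = 0
    for s in SECTORS:
        if len(country[s]) >= 2:          # sector supervised by two authorities
            elsewhere = {a for t in SECTORS if t != s for a in country[t]}
            adjustment = 1 if any(a in elsewhere for a in country[s]) else -1
            break
    return base + adjustment

def cbfa_index(country, central_bank="CB"):
    sectors_covered = sum(central_bank in country[s] for s in SECTORS)
    if sectors_covered == 3:
        return 4
    if sectors_covered == 2:
        return 3
    return 2 if country["banking"][0] == central_bank else 1

# Hypothetical country: the central bank supervises banking only, with separate
# securities and insurance supervisors.
example = {"banking": ["CB"], "securities": ["SEC"], "insurance": ["INS"]}
print(fsu_index(example), cbfa_index(example))   # -> 1 2
```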

Measuring consolidation and central bank involvement: new indicators


What are the shortcomings of the previous indices? They have been designed to be consistent with the aim of measuring the degree of consolidation of the supervisory powers, but they use subjective weights to differentiate some cases, for example, in giving more relevance to the supervision of both the banking and securities industries, or in evaluating other situations, for example, the degree of consolidation when there are at least two supervisors in one sector, or when a supervisor is in charge of more than one sector. Consequently, one type of improvement could be to reduce the role of the subjective weights. Starting with the supervisory architectures, we introduce two indicators to evaluate the two main characteristics highlighted in the literature: the degree of supervisory consolidation and central bank involvement in supervision.

We propose the Financial Supervision Herfindahl Hirschman (FSHH) index. The FSHH is a measure of the level of consolidation of the supervisory powers that we derive by applying to this novel field the classical index proposed by Herfindahl and Hirschman [Hirschman (1964)]. We use the FSHH index to calculate the degree of supervisory consolidation. The robustness of the application of the FSHH to analyze the degree of concentration of power in financial supervision depends on the following three crucial hypotheses [Masciandaro et al. (2011)].

First of all, it must be possible to define both the geographical and the institutional dimension of each supervisory market, so that it is possible to define the different sectors to be supervised (institutional dimension) in each country (geographical dimension). In other words, in every country each financial market forms a distinct market for supervision. It is still possible to identify both the geographical dimension, i.e., the existence of separate nations, and the institutional dimension, i.e., the existence of separate markets, notwithstanding the fact that the blurring of the traditional boundaries between banking, securities, and insurance activities and the formation of large conglomerates have diluted the definition of the intermediaries. Then, in each sector we can define the distribution of the supervisory powers among the different authorities, if more than one agency is present, and consequently their shares, without ambiguity. For each sector, the lower the degree of supervisory consolidation, the greater the number of authorities involved in monitoring activity.

Secondly, we consider the supervisory power as a whole. Given the different kinds of supervisory activity (banking supervision, securities markets supervision, insurance supervision), there is perfect substitutability among them in terms of supervisory power and/or supervisory skills. The supervisory power is a feature of each authority as an agency, irrespective of where this supervisory power is exercised (agency dimension). Consequently, in each country and for each authority, we can sum the share of the supervisory power it enjoys in one sector with the share it owns in another one (if any). For each authority, the greater the number of sectors over which it exercises monitoring responsibility, the greater its degree of supervisory power. All three dimensions, geographical, institutional, and agency, have both legal foundations and economic meaning.

Finally, we prefer to adopt the HH index rather than the classic Gini index in order to emphasize the fact that the overall number of authorities matters. In general, the use of the HH index rather than other indices of concentration, such as the entropy index, gives more weight to the influence of the unified authorities, which is, as we stressed above, the main feature of the recent evolution in the shape of the supervisory regimes. We calculate the FSHH index by summing up the squares of the supervisory shares of all the regulators of a country. For each country, the FSHH index is equal to:

FSHH = Σ_{i=1}^{N} s_i^2
where s_i is the share of supervisory power of authority i and N is the total number of authorities. For each authority i, we consider that in each country there are three sectors to supervise (each sector has the same importance) and that in each sector there can be more than one authority (each authority has the same importance). We use the following formula:

s_i = Σ_{j=1}^{m} s_{ij}, with s_{ij} = (1/3) · (1/q_j),

where m is the number of sectors in which authority i is present as a supervisor and q_j is the number of authorities involved in supervision in sector j. In other words, if in one sector there is more than one authority, the supervisory power is equally divided among the supervisors.
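To make the construction of the FSHH index concrete, the sketch below applies the share definition just given (three sectors of equal weight, with a sector's weight split equally among its q_j supervisors) to two hypothetical configurations; the central bank's share computed this way is also the share used for the CBFS index introduced below. This is an illustration of the formula, not the authors' code.

```python
from collections import defaultdict

SECTORS = ("banking", "securities", "insurance")

def supervisory_shares(country):
    """Each sector carries weight 1/3; if q_j authorities supervise sector j,
    each of them receives 1/3 * 1/q_j of the total supervisory power."""
    shares = defaultdict(float)
    for sector in SECTORS:
        authorities = country[sector]
        for a in authorities:
            shares[a] += (1.0 / len(SECTORS)) * (1.0 / len(authorities))
    return dict(shares)

def fshh_index(country):
    """FSHH = sum of squared supervisory shares over all authorities."""
    return sum(share ** 2 for share in supervisory_shares(country).values())

# Hypothetical configurations
unified  = {"banking": ["FSA"], "securities": ["FSA"], "insurance": ["FSA"]}
sectoral = {"banking": ["CB"], "securities": ["SEC"], "insurance": ["INS"]}

print(round(fshh_index(unified), 3))    # 1.0   -> full consolidation
print(round(fshh_index(sectoral), 3))   # 0.333 -> three equal-sized supervisors
print(round(supervisory_shares(sectoral)["CB"], 3))  # 0.333 = the central bank's share
```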

We can use the FSHH index to provide a quantitative perspective on the state of the art of the supervisory regimes. We analyze the situation before and after the recent crisis, according to country income and regional adherence. Figure 1 provides this perspective. First, it shows that before the crisis (2007, blue bars) the degree of consolidation was on average greater in the European Union than in the industrial countries as a whole or in the overall European region. These three groups score higher than the overall country sample. Second, the consolidation process progressed in these three groups of countries during the crisis (2009, red bars), while the overall sample shows a slight reduction.
Figure 1 Financial supervision unification (FSHH index by country group: all countries, OECD, Europe, E.U.; 2007 vs. 2009)

Figure 2 Central bank involvement in supervision (CBFS index by country group: all countries, OECD, Europe, E.U.; 2007 vs. 2009)

Our data, therefore, show that also during the crisis the supervisory reforms were driven by a general tendency to reduce the number of agencies in order to reach the unified model, unknown before 1986, or the so-called peak model, which characterized the two decades between 1986 and 2006 [Masciandaro and Quintyn (2009)].

The new methodology can also be used to construct an index of central bank involvement in supervision: the Central Bank as Financial Supervisor (CBFS) index. The intuition is quite simple: the greater the supervisory share of the central bank, the higher the odds that the central bank will be involved in the overall regulatory organization. In other words, central bank involvement in supervision is likely to be at its maximum where the central bank is the unified supervisor in charge, while the involvement is likely to be lower, the smaller the number of sectors in which the central bank has supervisory responsibilities. To construct the CBFS index we simply take the share of the central bank in each country, which can range from 0 to 1.

Again, we can use the CBFS index to offer a numerical description of the degree of central bank involvement in supervision, before and after the crisis. Two facts emerge. Figure 2 shows that before the crisis (2007, yellow bars) the industrial countries had on average a lower level of involvement, and that the European countries as well as the E.U. member states experienced even less central bank involvement in supervision than the overall sample. However, in response to the crisis we note a sort of Great Reversal: the 2009 data (green bars) show that in the industrialized, European, and E.U. countries central bank involvement increased. The new trend can be explained by at least two different factors.

First of all, in some countries the central bank can be more involved in supervision because monetary responsibilities are not completely in its hands. Among the central banks that do not have full responsibility for monetary policy, such as those of the countries belonging to the European Monetary Union, some countries chose the route of central bank specialization in supervision; the Czech Republic, Ireland, the Netherlands, and the Slovak Republic are the countries in which this is strongest. In general, it has been noted [Herring and Carmassi (2008)] that the central banks of members of the EMU have become financial stability agencies.

Second, the experience of recent years has stressed the importance of overseeing systemic risks in the system. In other words, it is crucial to monitor and assess the threats to financial stability that can arise from macroeconomic as well as macro-financial developments (so-called macro-supervision). The increasing emphasis on macro-supervision motivates policymakers to identify specific bodies responsible for it. To carry out macro-prudential tasks, information on the economic and financial system as a whole is required. The current turmoil has stressed the role of central banks in the prevention, management, and resolution of financial crises. Consequently, the view is gaining momentum that central banks are in the best position to collect and analyze this kind of information, given their role in managing monetary policy in normal times and the lender of last resort function in exceptional times. From the policymakers' point of view, therefore, the involvement of the central bank in the macro-supervision area means potential benefits in terms of information gathering. At the same time, they can postulate that the potential costs of the involvement are smaller than in the case of micro-supervision [moral hazard risk, conflict of interest risk, powerful bureaucracy risk; see Masciandaro (2008)]. In other words, the separation between micro- and macro-supervision can be used to reduce the arguments against central bank involvement.

Conclusion
The wave of reforms in supervisory architectures across the globe that we have witnessed since the end of the 1990s leaves the interested bystander with a great number of questions regarding the key features of the emerging structure, their true determinants, and their effects on the performance of banking and financial industries. This paper reviews the changing face of the financial supervisory regimes, introducing new indicators to measure the level of consolidation in supervision and the degree of the involvement of the central banks. We show that the new Financial Supervision Herfindahl Hirschman indices are (i) consistent with the previous ones, (ii) more precise, (iii) more robust, given that they exclude subjective weights, and (iv) easier to use and interpret, given that they apply a well-known measurement methodology. The new indices can be used in empirical studies on the determinants of the reform process and on the impact of the new architectures of financial supervision.

References


Allen, F., and E. Carletti, 2009, The global financial crisis: causes and consequences, mimeo
Arnone, M., and A. Gambini, 2007, Architecture of supervisory authorities and banking supervision, in Masciandaro, D., and M. Quintyn (eds.), Designing financial supervision institutions: independence, accountability and governance, Edward Elgar
Barth, J. R., G. Caprio, and R. Levine, 2006, Rethinking bank regulation: till angels govern, Cambridge University Press
Barth, J. R., D. E. Nolle, T. Phumiwasana, and G. Yago, 2002, A cross country analysis of the bank supervisory framework and bank performance, Financial Markets, Institutions & Instruments, 12:2, 67-120
Borio, C., 2003, Towards a macroprudential framework for financial regulation and supervision? BIS Working Papers, no. 128, Bank for International Settlements
Brown, E. F., 2005, E Pluribus Unum - out of many, one: why the United States needs a single financial services agency, American Law and Economics Association Annual Meeting
Brunnermeier, M., et al., 2009, The fundamental principles of financial regulation, Geneva Reports on the World Economy, no. 11
Buiter, W., 2008, Lessons from the North Atlantic financial crisis, paper presented at the conference "The role of money markets," Columbia Business School and Federal Reserve Bank of New York, May 29-30
CEPS, 2008, Concrete steps towards more integrated financial oversight, CEPS Task Force Report, December
Čihák, M., and R. Podpiera, 2007, Experience with integrated supervisors: governance and quality of supervision, in Masciandaro, D., and M. Quintyn (eds.), Designing financial supervision institutions: independence, accountability and governance, Edward Elgar
Coffee, J., 1995, Competition versus consolidation: the significance of organizational structure in financial and securities regulation, Business Lawyer, 50
Commission of the European Communities, 2009, Communication from the Commission: European financial supervision, COM(2009) 252 final
Council of the European Union, 2009, Council conclusions on strengthening EU financial supervision, 2948th Economic and Financial Affairs Council, Luxembourg, June 9
De Larosière Group, 2009, Report of the high level group on supervision
Department of the Treasury, 2008, Blueprint for a modernized financial regulatory structure, Washington, D.C.
Department of the Treasury, 2009, Financial regulatory reform: a new foundation, Washington, D.C.
Financial Services Authority, 2009, The Turner review, March, London
Goodhart, C. A. E., 2007, Introduction, in Masciandaro, D., and M. Quintyn (eds.), Designing financial supervision institutions: independence, accountability and governance, Edward Elgar
Hardy, D., 2009, A European mandate for financial sector supervisors in the EU, IMF Working Paper WP/09/5
Herring, R. J., and J. Carmassi, 2008, The structure of cross-sector financial supervision, Financial Markets, Institutions and Instruments, 17:1, 51-76
Hirschman, A. O., 1964, The paternity of an index, American Economic Review, 54:5, 761-762
House of Lords, 2009, The future of EU financial regulation and supervision, European Union Committee, 14th report of session 2008-2009, HL Paper 106-I, Authority of the House of Lords
Kremers, J., D. Schoenmaker, and P. Wierts, 2003, Cross sector supervision: which model? in Herring, R., and R. Litan (eds.), Brookings-Wharton Papers on Financial Services
Leijonhufvud, A., 2009, Curbing instability: policy and regulation, CEPR Policy Insight, no. 36, July
Masciandaro, D., 2004, Unification in financial sector supervision: the trade-off between central bank and single authority, Journal of Financial Regulation and Compliance, 12:2, 151-169
Masciandaro, D., 2006, E Pluribus Unum? Authorities design in financial supervision: trends and determinants, Open Economies Review, 17:1, 73-102
Masciandaro, D., 2007, Divide et Impera: financial supervision unification and the central bank fragmentation effect, European Journal of Political Economy, 23:2, 285-315
Masciandaro, D., 2008, Politicians and financial supervision unification outside the central bank: why do they do it? Journal of Financial Stability, 5:2, 124-147
Masciandaro, D., and M. Quintyn, 2009, Reforming financial supervision and the role of the central banks: a review of global trends, causes and effects (1998-2008), CEPR Policy Insight, no. 30, 1-11
Masciandaro, D., R. V. Pansini, and M. Quintyn, 2011, Economic crisis: does financial supervision matter? paper presented at the 29th SUERF Colloquium, Brussels
Orphanides, A., 2010, Monetary policy lessons from the crisis, CEPR Discussion Paper Series, no. 7891
Scott, K., 1977, The dual banking system: a model of competition in regulation, Stanford Law Review, 30:1, 1-49
World Bank and IMF, 2005, Financial sector assessment: a handbook, World Bank and IMF, Washington, D.C.

PART 1

What Reforms for the Credit Rating Industry? A European Perspective


Karel Lannoo Chief Executive Officer, CEPS, and Director, ECMI1

Abstract
Credit rating agencies were the first victim of the crisis, with a regulation adopted in a period of six months, a record by E.U. standards. The regulation subjects E.U.-based CRAs to a mandatory license and strict conduct of business rules, whereas, unlike in the U.S., no rules had been in place before. This article discusses the role of credit rating agents today, the regulatory role of ratings, the scope of the E.U. regulation, and the regulatory approach to the business models of the large ratings agents. It concludes that the regulation should have impacted the business model of ratings agents more fundamentally.
1 This paper was initially prepared for the Competition Committee of the OECD, meeting in Paris, 16 June 2010. Comments from participants at that meeting are gratefully acknowledged, as well as from Piero Cinquegrana, Chris Lake, Barbara Matthews, and Diego Valiante.


Credit rating agencies (CRAs) continue to find themselves in the eye of the storm. Despite having singled out the industry early on in the financial crisis as needing more regulation, policymakers seem not to be reassured by the measures that have been adopted in the meantime, and want to go further. Faced with a rapid downgrading of ratings in the context of the sovereign debt crisis, European Commissioner Michel Barnier raised the possibility last May of creating a new E.U.-level agency that would specialize in sovereign debt, and of regulating CRAs further.

The debate on the role of rating agents considerably pre-dates this crisis. As early as the 1997 South-East Asia crisis, the delayed reaction of rating agents to the public finance situation of these countries was strongly criticized. The same criticism of CRAs was leveled in the dot.com bubble in 2001. Many reports were written on their role in that episode, but it was not until mid-2008 that a consensus emerged in the E.U. that the industry was in need of statutory legislation. In the meantime, the U.S. had adopted the Credit Rating Agency Reform Act in 2006. At the global level, in 2003, the International Organisation of Securities Commissions (IOSCO) adopted a Statement of Principles on the role of credit rating agencies, but apparently the initiative has not been successful.

Rating agents pose a multitude of regulatory problems, none of which can be solved easily. Some of these are specific to the profession and the current market structure, whereas others are of a more generic nature. Some are related to basic principles of conduct in the financial services sector, while others are part of horizontal market regulation. The financial crisis also demonstrated the important role of rating agents in financial stability, which involves macro-prudential authorities.

Box 1: The big three

Moody's Investors Service was incorporated in 1914 as a bond rating and investment analysis company. Today, the listed company Moody's Corporation is the parent of Moody's Investors Service, which provides credit ratings and research covering debt instruments and securities, and Moody's Analytics, which encompasses the non-ratings businesses, including risk management software for financial institutions, quantitative credit analysis tools, economic research and data services, data and analytical tools for the structured finance market, and training and other professional services. Combined, they employ about 4,000 people.

Standard & Poor's was incorporated in 1941, following the merger of two firms active in credit risk analysis. Both firms originated in circumstances similar to Moody's, in the context of the huge industrial expansion of the U.S. in the second half of the 19th and early 20th centuries. S&P was taken over in 1966 by McGraw-Hill, the listed media concern, and it forms the most important part of the group in terms of revenues, and even more so in profits (about 73 percent), although these have seriously declined since 2007. S&P financial services, which includes the ratings service, employs about 7,500 people.

Fitch Ratings, by far the smaller and more European player in the sector, with headquarters in New York and London, is part of the Fitch Group. The Fitch Group also includes Fitch Solutions, a distribution channel for Fitch Ratings products, and Algorithmics, a leading provider of enterprise risk management solutions. The Fitch Group has been a majority-owned (60 percent) subsidiary of Fimalac S.A. since 1997, which has headquarters in Paris and is listed on Euronext, but with a very low free float. Fitch grew through acquisitions of several smaller ratings agents, including IBCA and Duff & Phelps. Fitch employs 2,266 people.


The credit ratings industry today


The credit ratings industry is a global business, controlled by a handful of players, two of which are of U.S. parentage. Moody's and Standard & Poor's alone possess more than four-fifths of the market. With Fitch, the three leading players control over 94 percent of the global market [European Commission (2008)]. See brief portraits of these three companies in Box 1.

As shown in Table 1, the three groups have suffered serious declines in revenue since 2007, especially Fitch. Its revenues have declined by 26 percent since 2007, and its net income by 70 percent. This may confirm the finding, discussed below, that more competition does not necessarily improve quality, but that newcomers, in this case Fitch, attempt to attract market share with a short-term strategy. Firms may also have abandoned ratings, which cost between 45,000 and 90,000 per annum, plus 0.05 percent of the total value of a bond emission. Table 1 further indicates that the market shares of the three firms have been fairly constant over the period 2006-09.

That the credit rating business is essentially of American parentage should be no surprise, as it is an intrinsic part of the market-driven system pioneered by the U.S. Unlike the bank-driven model, which is common in Europe, a market-driven system relies upon a multi-layered system to make it work [Black (2001)]. Reputational intermediaries, such as investment banks, institutional investors, law firms, and rating agents, and self-regulatory organizations, such as professional federations and standard-setters, play an important role in making the system, in between issuers and supervisors, work. In effect, financial markets are constantly affected by adverse selection mechanisms, and investors need third-party tools such as credit ratings in order to reduce asymmetric information and increase their ability to understand the real risk of financial instruments.

Since there had not been much of a capital market in Europe until recently, banks have essentially performed the credit-risk analysis function, and continue to do so. But the credit-risk analysis function of European banks declined, possibly as a result of the reputational strength of the U.S. capital market model. The introduction of the euro and a set of E.U. regulatory measures led to the rapid development of European capital markets, and to demand for ratings. Moreover, European authorities created a captive market for an essentially U.S.-based industry.


                        2006     2007     2008     2009   07-09 (%)
Moody's   Turnover    2037.1     2259   1775.4   1797.2     -20.4
          Net income   753.9      701    461.6    407.1     -41.9
S&P       Turnover      2750   3046.2   2653.3     2610     -14.3
          Net income    n.a.   440.16    327.8    307.4     -30.2
Fitch     Turnover     655.6    827.4    731.2    613.5     -25.9
          Net income    n.a.    120.2       44     35.8     -70.2

Sources: 10-K filings to the U.S. SEC by Moody's and McGraw-Hill, other filings by Fimalac and Hoover, S&P's and Fitch's websites.

Table 1 Turnover and net income of the big three ratings businesses, 2006-09 ($ millions)
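As a quick sanity check on the last column of Table 1 (and on the decline figures quoted in the text), the percentage changes can be recomputed directly from the 2007 and 2009 columns; the snippet below is purely illustrative.

```python
# Recompute the "07-09 (%)" column of Table 1 from the 2007 and 2009 values.
def pct_change(start, end):
    return round(100.0 * (end - start) / start, 1)

print(pct_change(827.4, 613.5))    # -25.9  (Fitch turnover)
print(pct_change(120.2, 35.8))     # -70.2  (Fitch net income)
print(pct_change(2259.0, 1797.2))  # -20.4  (Moody's turnover)
```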


A captive market for CRAs in the E.U.


Two forms of regulation have given the CRAs a captive market in the E.U.: Basel II,2 implemented in Europe as the Capital Requirements Directive (CRD), and the liquidity-providing operations of the European Central Bank (ECB). Both explicitly use the rating structure of the CRAs to determine risk weighting for capital requirement purposes, thresholds in the former case and haircuts3 in the ECB's liquidity-providing operations. The U.S. does not use either of these practices, as it has not implemented Basel II (largely because the Federal Reserve did not want to have the vast majority of U.S. banks relying on CRAs for setting regulatory risk weights), and the discount window of the Fed is not based upon ratings. The Dodd-Frank Wall Street Reform and Consumer Protection Act4 of July 2010 goes even further, requiring regulators to remove any references from their rules to investment grade and credit ratings of securities.5

The Basel II proposals were finalized in November 2005 after lengthy discussions, among other things because of their pro-cyclical impact and the use of private sector rating agents. In its standardized approach, to be used by less sophisticated banks, Basel II bases risk weightings on rating agents' assessments. The capital requirements increase as the rating declines, from 0 percent for AA-rated (and higher) government bonds, or a minimum of 20 percent for banks and corporates, up to 150 percent for ratings of CCC or below. However, in the E.U.'s CRD, the risk weighting is 0 percent across the board for all sovereigns in the European Economic Area (EEA) funded in domestic currency. A zero-risk weighting means that a bank does not have to set any capital aside for these assets. No indication has been given so far that the reliance on rating agents for the risk weightings will be changed in the Basel III proposals, published on 12 September 2010.

Since CRAs were not subject to E.U. regulation at the time the CRD was adopted, the Committee of European Banking Supervisors (CEBS) issued Guidelines on the recognition of external credit assessment institutions in January 2006. These guidelines set criteria for determining external credit assessments on the basis of the CRD risk weights. The use of a rating agent for the purposes of the CRD is thus the prerogative of the national supervisory authorities. For comparison, the Japanese FSA has designated five rating firms as qualified to calculate risk weights for the standardized approach: the big three and two smaller Japanese firms.

The use of rating agents is possibly even more prevalent in the assessment of marketable assets used as collateral in the ECB's liquidity-providing operations. The credit assessment for eligible collateral is predominantly based on a public rating, issued by an eligible External Credit Assessment Institution (ECAI). In the ECB's definition, an ECAI is an institution whose credit assessments may be used by credit institutions for determining the risk weight of exposures according to the CRD.6 The minimum credit quality threshold is defined in terms of a single A credit assessment,7 which was temporarily relaxed during the financial crisis to BBB-. If multiple and possibly conflicting ECAI assessments exist for the same issuer/debtor or guarantor, the first-best rule (i.e., the best available ECAI credit assessment) is applied.8 The liquidity categories for marketable assets are subdivided into five categories, based on issuer classification and asset type, with an increasing level of valuation haircuts, depending on the residual maturity.9

An important group of assets in the context of the financial crisis, classified as category V, are the asset-backed securities (ABS), or securitization instruments. The extent to which banks used ABS collateral in liquidity operations rose dramatically after mid-2007, from 4 percent in 2004 to 18 percent in 2007 and 28 percent in 2008 [Fitch (2010)]. Within ABS, residential mortgage-backed securities (RMBS) form the most important element, exceeding 50 percent. These securitization instruments, and in particular the residential mortgage-backed securities segment, were an extremely important market for CRAs. Moody's, for example, assigned the AAA rating to 42,625 RMBS from 2000 to 2007 (9,029 mortgage-backed securities in 2006 alone), and later had to downgrade the assets. In 2007, 89 percent of those originally rated as investment grade were reduced to junk status.10
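To see how a rating feeds through to a capital number, the sketch below works through the standardized-approach arithmetic for a corporate exposure, using the familiar Basel II corporate risk-weight buckets and the 8 percent minimum capital ratio; the figures are illustrative only, and national (CRD) implementations differ in detail.

```python
# Illustrative only: Basel II standardized-approach capital charge for a corporate
# exposure, driven entirely by the external (CRA) rating. The buckets below are
# the familiar Basel II corporate mapping; CRD implementations differ in detail.

MIN_CAPITAL_RATIO = 0.08   # 8 percent of risk-weighted assets

def corporate_risk_weight(rating):
    if rating is None:
        return 1.00        # unrated corporate exposures
    if rating in {"AAA", "AA+", "AA", "AA-"}:
        return 0.20
    if rating in {"A+", "A", "A-"}:
        return 0.50
    if rating in {"BBB+", "BBB", "BBB-", "BB+", "BB", "BB-"}:
        return 1.00
    return 1.50            # below BB-

def capital_charge(exposure, rating):
    """Minimum capital = exposure x risk weight x 8 percent."""
    return exposure * corporate_risk_weight(rating) * MIN_CAPITAL_RATIO

# A 100m corporate bond: the capital a bank must hold tracks the rating directly.
for r in ("AA", "A", "BBB", "B"):
    print(r, capital_charge(100_000_000, r))
# AA -> 1.6m, A -> 4.0m, BBB -> 8.0m, B -> 12.0m: a downgrade is costly for banks
# subject to capital requirements, which is one channel behind the captive market.
```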

2 The second set of recommendations issued in June 2004 by the Basel Committee on Banking Supervision, Basel II creates an international standard that banking regulators may use in establishing regulations governing how much capital banks must set aside to counter the financial and operational risks they face.
3 A deduction in the market value of securities being held by brokerage and investment banking firms as part of net worth for calculating their net capital.
4 Public Law 111-203, Dodd-Frank Wall Street Reform and Consumer Protection Act (http://www.gpo.gov/fdsys/pkg/PLAW-111publ203/pdf/PLAW-111publ203.pdf)
5 Clifford Chance (2010), p. 73.
6 ECB (2006), The implementation of monetary policy in the Euro Area, General documentation on Eurosystem monetary policy instruments and procedures, September, p. 43.
7 Single A means a minimum long-term rating of A- by Fitch or Standard & Poor's, or an A3 rating by Moody's [ECB (2006)].
8 ECB (2008), The implementation of monetary policy in the Euro Area, General documentation on Eurosystem monetary policy instruments and procedures, November, p. 42.
9 The liquidity categories were changed in September 2008 and the valuation haircuts increased in July 2010. See the latest changes to risk control measures in Eurosystem credit operations, European Central Bank, press notices, 4 September 2008 and 28 July 2010.
10 According to Phil Angelides, Chairman of the ten-member Financial Crisis Inquiry Commission appointed by the U.S. government to investigate the causes of the financial crisis, quoted in Bloomberg, 2 June 2010.


The E.U. rating agencies regulation


As the financial crisis erupted, the developments recounted above and others rapidly led to a policy consensus that rating agents should be regulated at E.U. level. The proposal for a regulation was published in November 2008 and adopted in April 2009, a minimal interval by E.U. decision-making standards.11 The regulation was the first new E.U. legislative measure triggered by the financial crisis. It is also one of the first financial services measures to be issued as a regulation, meaning it is directly applicable, rather than a directive, which has to be implemented in national law.

The E.U. was not starting from scratch. Back in 2004, further to an own-initiative report of the European Parliament (the Katiforis report), the European Commission asked the Committee of European Securities Regulators (CESR) for technical advice regarding market practice and competitive problems among CRAs. In a Communication published in December 2005, it decided that no legislation was needed for three reasons: 1) three E.U. directives already cover ratings agents indirectly: the market abuse directive, the CRD, and MiFID; 2) the 2004 Code of Conduct Fundamentals for Credit Rating Agencies,12 published by IOSCO; and 3) self-regulation by the sector, following the IOSCO Code.13

In 2006, in a report for the Commission, CESR concluded that the rating agents largely complied with the IOSCO Code.14 But concerns remained regarding the oligopoly in the sector, the treatment of confidential information, the role of ancillary services, and unsolicited ratings. In a follow-up report published in May 2008, focusing especially on structured finance, CESR strongly recommended following the international market-driven approach by improving the IOSCO Code. Tighter regulation would not have prevented the problems emerging from the loans to the U.S. subprime housing market, according to CESR. Notwithstanding CESR's advice, the Commission went ahead and issued a proposal in November 2008, after two consultations in July and September 2008.

The E.U. regulation requires CRAs to be registered and subjects them to ongoing supervision; defines the business of the issuing of credit ratings; sets tight governance (board structure and outsourcing), operational (employee independence and rotation, compensation, prohibition of insider trading, record keeping), and conduct of business (prohibition of conflicts of interest in the exercise of ratings or through the provision of ancillary services to the rated entity) rules for CRAs; requires CRAs to disclose potential conflicts of interest and their largest client base; and requires CRAs to disclose their methodologies, models, and rating assumptions. CESR is mandated to set standards for methodologies and to establish a central repository with historical performance data.

The regulation came into force 20 days after its publication in the Official Journal, on 7 December 2009. But guidance had to be provided by CESR before the regulation could take effect: by 7 June 2010, regarding registration, supervision, the endorsement regime, and supervisory reporting; and by 7 September 2010, regarding enforcement practices, rating methodologies, and certification. CESR has to report annually on the application of the regulation.

The novelty in the regulation is the central role of CESR in providing advice regarding the requirement for registration by a CRA in an E.U. member state, and in informing all the other member states. The home and host member states of the CRA are required to establish a college and to cooperate in the examination of the application and in day-to-day supervision. Host member states are not only those where a CRA has a branch; they are also those where the use of credit ratings is widespread or has a significant impact. In these circumstances, the host country authority may at any time request to become a member of the college (Art. 29.3). Host countries can also act against an agency deemed to be in breach of its obligations (Art. 25). CESR has the authority to mediate between the competent authorities (Art. 31), which had the effect of pre-empting its transformation into a securities market authority under the proposals discussed further to the de Larosière report.15

As the industry is essentially of U.S. parentage, a focal point in the discussions was the third country regime.

The regulation states that CRAs established in a third country may apply for certification, provided that they are registered and subject to supervision in their home country, and that the Commission has adopted an equivalence decision. However, credit ratings issued in a third country can only be used if they are not of systemic importance to the E.U.'s financial stability (Art. 5.1), meaning that all large CRAs need to be fully registered in the E.U. system. In addition, credit ratings produced outside the E.U. have to be endorsed by the CRA registered in the E.U., subject to a series of conditions (Art. 4.3). It has been argued that this regime will unnecessarily fragment global capital markets. Foreign companies will be less inclined to raise capital in the E.U. as they need a local endorsement of their rating. E.U. financial institutions will invest less abroad, as the ratings on third country investments may be seen to be of insufficient quality, unless they are endorsed in the E.U., or their rating agents are equivalent. The regime could also be qualified as anti-competitive, as smaller CRAs without an E.U. presence, such as the two largest CRAs in Asia, may stop rating E.U. sovereigns and issuers. Establishing a local presence in the E.U. could be too costly, and the client base for these ratings would as a result diminish, since they can no longer be used by European banks [St. Charles (2010)].

11 Regulation 1060/2009 of 16 September 2009, OJ 17.11.2009.
12 See http://www.iosco.org/library/pubdocs/pdf/IOSCOPD180.pdf.
13 Communication from the Commission on Credit Rating Agencies (2006/C 59/02), OJ C 59/2 of 11.03.2006.
14 CESR's Report to the European Commission on the compliance of credit rating agencies with the IOSCO Code, CESR, 06-545.
15 Report of the High-Level Group on Financial Supervision in the E.U., chaired by Jacques de Larosière, 25 February, 2009, Brussels.


Box 2  The Dodd-Frank Bill and CRAs
The new E.U. regime for CRAs is very closely aligned with the new U.S. regime, as introduced by the Dodd-Frank Bill. Whereas the U.S. had already legislated on the sector in 2006 with the Credit Rating Agency Reform Act, this was a light regime that required CRAs to register with the Securities and Exchange Commission (SEC) in Washington, D.C., as a Nationally Recognized Statistical Rating Organization (NRSRO). The Dodd-Frank Bill fundamentally alters this regime by imposing tight operational (internal controls, conflicts of interest, qualification standards for credit rating analysts) and governance requirements, and detailed disclosure requirements (including disclosure of the methodologies used). The SEC is required to create an Office of Credit Ratings to implement the measures of the Bill, to issue penalties, and to conduct annual examinations and reports.
Source: Clifford Chance (2010)

The amendments tabled by the Commission on 2 June, 2010 modify the regulation to accommodate the imminent creation of the European Securities Market Authority (ESMA), and to further centralize the supervision of CRAs.16 ESMA would become the sole supervisor, for the sake of efficiency and consistency, doing away with the complex system described above. National supervisors will remain responsible, however, for the supervision of the use of credit ratings by financial institutions, and can request ESMA to withdraw a license. ESMA can ask the European Commission to impose fines for non-respect of provisions of the regulations (see Annex III of the proposal). ESMA may also delegate specific supervisory tasks to national authorities. The proposal does not, however, propose any involvement of the European Systemic Risk Board (ESRB), which could have been useful in the control of the methodologies and the macroeconomic models used by CRAs. The draft regulation finally requires issuers of structured finance instruments to disclose the same information which they have given to the CRA, as is the case under the U.S. SECs Rule 17g-5. This change was welcomed by the markets as it would make both regimes convergent.


The regulatory debate


The E.U.'s regulation does not alter the fundamental problem that CRAs pose from a public policy perspective: 1) the oligopolistic nature of the industry, 2) the potential conflict of interest through the issuer-pays principle, and 3) the public good of the private rating. The E.U. approach seems to be a second-best solution. A more fundamental review of the CRAs' business model is needed, for which other industry sectors could provide useful alternative models. On the structure of the industry, the E.U. increases the barriers to entry by introducing a license and setting tight regulation, rather than taking the oligopolistic nature as one of the fundamental reasons for the abuses. In addition, since statutory supervision of the industry may increase moral hazard, it gives a regulatory blessing and will further reduce the incentives for banks to conduct proper risk assessments. It creates the illusion that the industry will live up to the new rules, and that these will be adequately supervised. For Pagano and Volpin (2009), the preferred policy is more drastic: 1) ratings should be paid for by investors, and 2) investors and ratings agencies should be given free and complete access to all information about the portfolios underlying structured debt securities. The investor-pays principle was the rule in the U.S. until the 1970s, but because of increasingly complex securities in need of large resources and the fear of declining revenues resulting from the dissemination of private ratings through new information technologies, the issuer-pays principle was introduced. Pagano and Volpin do not discuss, however, how to deal with free riding. But moving back to the investor-pays principle may also require further regulation to prohibit the sale of ancillary services by CRAs to issuers.

The E.U. regulation goes in the direction of requiring more disclosure (see Annex I, Section E of the regulation), but it is questionable whether investors will read this. On the contrary, given that a supervisory fiat has been given, investors may be even less inclined to read all the information, as was demonstrated during the financial crisis. Making investors pay would bring the ratings agents closer to the profession of analysts and investment advisors, which is regulated under the E.U.'s Markets in Financial Instruments Directive (2004/39). MiFID requires investment advisors to be licensed, to act in the best interests of their clients, and to identify, disclose, and avoid conflicts of interest. MiFID also states that firewalls must be constructed between analysts and sales departments in banks. Ponce (2009) discusses an interesting alternative to the issuer-pays and investor-pays models: the platform-pays model. He demonstrates on the basis of large datasets that the transition from the investor-pays to the issuer-pays model had a negative impact on the quality of the ratings. Under the issuer-pays model, a rating agency may choose a quality standard below the socially efficient level. In this case, Ponce argues, a rating agency does not internalize the losses that investors bear from investing in low-quality securities. A rating agent may give ratings to low-quality securities in order to increase its revenues. To avoid this, Ponce proposes the platform-pays model, which takes the form of a clearing house for ratings, complemented by prudential oversight of ratings quality to control for bribery. The platform assigns the agent (based on performance and experience) and the issuer pays up front. This would at the same time overcome the oligopoly problem. The problem with this model, however, is that its governance will need to be completely watertight.

16 Proposal for a regulation of the European Parliament and of the Council on amending regulation (EC) No 1060/2009 on credit rating agencies, COM(2010) 289/3.


Other research finds that more competition would not necessarily improve standards, however. New entrants do not necessarily improve the quality of ratings. On the contrary, they attract business by friendly and inflated ratings. As competition reduces future rents, it increases the incentive to seek short-term gains by cheating. In an analysis of the corporate bond markets, Becker and Milbourn (2009) find a significant positive correlation between the degree of competition and the level of the credit ratings (Figure 1). Concretely, they find a positive correlation between Fitch's entrance in the market and ratings levels, without exception. Considering that incentives and reputational mechanisms are key, Larry Harris (2010) proposes an entirely different approach. He takes his inspiration from the bonus debate in the banking sector, and proposes to defer a part of the payment based upon results. Given that credit ratings are about the future, the performance of the securities rated would be the indicator for the fees rating agents can charge. An important part of the fees would be put into a fund, against which the ratings agencies could borrow to finance their operations. Disclosure of these deferred contingent compensation schemes would be required, so that investors can decide for themselves which schemes provide adequate incentives. Another possibility for creating the right incentives is to move to a partnership structure in the rating business, as is common in the audit sector. The audit sector has several similarities with rating agencies: in the type of work, the importance of reputation and global presence, the network economies and the oligopolistic structure, and the conflicts of interest. The audit sector is regulated by an E.U. Directive (2006/43/EC) that brought the sector under statutory supervision. It sets tight rules on governance and quality control, and limits the degree of non-audit services that audit firms can perform for an audit client. This directive also has an important third country equivalence regime. It is interesting to note in this context that at least two audit firms have recently expressed an interest in starting a rating business. The downside of the partnership model is the liability problem, however, which may deter many from being active in that business. During the sovereign debt crisis, European and national policymakers have repeatedly raised the possibility of creating local CRAs, possibly even government-sponsored entities. A state-controlled CRA would lack independence, and hence credibility, and, as demonstrated above, it is not necessarily more competition that will solve the problem.

Figure 1  Firm credit ratings distribution: high and low competition in the industry (distribution of corporate ratings from C to AAA, shown separately for low- and high-competition samples). Source: Becker & Milbourn (2009)

Conclusion
Considering the policy alternatives outlined above, the E.U. and the U.S. should probably have considered the specificities of the sector more carefully before embarking upon legislation. The legislation that was adopted does not alter the business model of the industry and gives rise to side effects, the most important of which is the supervisory seal. Given the depth of the financial crisis and the central role played by ratings agents, certainly in the E.U., a more profound change would be useful, towards the platform-pays model or a long-term incentive structure, as discussed above. The E.U. regulation, as adopted, consolidates the regulatory role of CRAs in the E.U. system, but the price is high. It fragments global capital markets, as it introduces a heavy equivalence process, and requires a local presence of CRAs and endorsement of systemically important ratings. It is at the same time protectionist. Under the new set-up, CESR and its successor, ESMA, are given a central role in the supervision of CRAs, but the question is whether they will be able to cope. The supervisor needs to check compliance with the basic requirements to decide on a license and to verify adherence to the governance, operational, methodological, and disclosure requirements imposed upon CRAs. This is a heavy workload, especially considering that no supervision had been in place until a few months ago. Given the present debate on the role of CRAs in financial stability and the need for technical expertise, the European Systemic Risk Board could have been involved, but this seems not to have been considered, at least for now. On the other hand, the advantage of having a regulatory framework in place is that the Commission's competition directorate can start scrutinizing the sector from its perspective. To our knowledge, the competition policy dimensions of the CRA industry in Europe have not been closely investigated so far, as no commonly agreed definitions and tools were available at E.U. level, and since the sector is essentially of U.S. parentage. E.U. registration for the large CRAs will allow the authorities to check their compliance with E.U. Treaty rules on concerted practices and abuse of dominant position. This may ruffle some feathers.


References

Arbak, E. and P. Cinquegrana, 2008, Report of the CEPS-ECMI joint workshop on the reform of credit rating agencies, November (available at www.eurocapitalmarkets.org)
Becker, B. and T. Milbourn, 2009, Reputation and competition: evidence from the credit rating industry, Harvard Business School, Working Paper No. 09-051, Cambridge, MA
Black, B. S., 2001, The legal and institutional preconditions for strong securities markets, UCLA Law Review, 48, 781-855
Cinquegrana, P., 2009, The reform of the credit rating agencies: a comparative perspective, ECMI Policy Brief, February (www.ceps.eu and www.eurocapitalmarkets.org)
Clifford Chance, 2010, Dodd-Frank Wall Street Reform and Consumer Protection Act, Client Briefing Report, July
Coffee, J., 2010, Ratings reform: a policy primer on proposed, pending and possible credit rating reforms, paper presented at a meeting of the OECD Competition Committee, 16 June, Paris
ECB, 2006, The implementation of monetary policy in the Euro Area, General documentation on Eurosystem monetary policy instruments and procedures, September, Frankfurt
ECB, 2008, The implementation of monetary policy in the Euro Area, General documentation on Eurosystem monetary policy instruments and procedures, November, Frankfurt
European Commission, 2008, Commission staff working document accompanying the proposal for a regulation of the European Parliament and of the Council on credit rating agencies: impact assessment, SEC/2008/2746 final, November
European Commission, 2010, Proposal for a regulation of the European Parliament and the Council on amending Regulation (EC) No 1060/2009 on credit rating agencies, June
Fitch, 2010, The role of the ECB: temporary prop or structural underpinning? Special Report, 11 May
Harris, L., 2010, Pay the rating agencies according to results, Financial Times, 4 June
Pagano, M. and P. Volpin, 2009, Credit ratings failures: causes and policy options, in Dewatripont, M., X. Freixas, and R. Portes (eds.), Macroeconomic stability and financial regulation: key issues for the G20, VoxEU, 2 March (http://www.voxeu.org/index.php?q=node/3167)
Ponce, J., 2009, The quality of credit ratings: a two-sided market perspective, August (http://www.bcu.gub.uy/autoriza/peiees/jor/2009/iees03j3471009.pdf)
Richardson, M. and L. J. White, 2009, The rating agencies: is regulation the answer? in Acharya, V. V. and M. Richardson (eds.), Restoring financial stability, New York: Wiley
St. Charles, C., 2010, Regulatory imperialism: the worldwide export of European regulatory principles on credit rating agencies, Minnesota Journal of International Law, 19:1, 177-200


PART 1

Money Market Funds and Financial Stability: Comparing Sweden to the U.S. and Iceland
Gudrun Gunnarsdottir, Financial Economist, Financial Stability Department, Sveriges Riksbank
Maria Strömqvist, Quantitative Analyst, Brummer Multi-Strategy1

Abstract
The financial crisis, in particular the collapse of Lehman Brothers, has revealed that money market funds are more risky than had previously been believed. We discuss the importance of money market funds for financial stability and whether situations similar to those during the recent crisis in the U.S. and Icelandic markets could arise in Sweden. We find that there are similarities between the Swedish and Icelandic funds, but few similarities with the U.S. funds. In Sweden, as was the case in Iceland, the assets under management are concentrated in a few funds and the connection to the major banks is strong. However, given the relatively small size of the money market funds in Sweden, we do not find that they, in isolation, are of major systemic importance as a source of funding for the Swedish banks. The funds are more
likely to have a systemic impact through spill-over effects on the banking system, especially in a market already characterized by high uncertainty and risk aversion. The money market funds are thus more important to certain parts of the financial market, such as the market for corporate commercial paper and covered bonds.

1 This paper was written when Maria Strömqvist was a senior economist at the Riksbank. We are grateful for useful comments and help with data from Elias Bengtsson, Anders Bjällskog, Heidi Elmér, Johanna Fager Wettergren, David Forsman, Johannes Holmberg, Kerstin Mitlid, Kjell Nordin, Fredrik Pettersson, Anders Rydén, Kristian Tegbring, and Staffan Viotti. The views expressed in this paper are the authors' and not necessarily those of the Riksbank. We are responsible for all errors.


As the recent financial crisis has shown, in certain situations money market funds can be considered important for financial stability. These situations are characterized by extensive uncertainty and instability in the markets. The money market funds in both the U.S. and Iceland were severely affected by the recent financial crisis. This paper discusses the importance of money market funds for financial stability and whether situations similar to those during the recent crisis in the U.S. and Icelandic markets could arise in Sweden. Do the Swedish money market funds have significant similarities to the U.S. and Icelandic money market funds? Factors that influence the importance of money market funds, apart from the market situation, include the risk of spill-over effects to the banking system, investor sentiment, and whether the funds are an important source of funding for banks and mortgage institutions. This paper examines the importance of these factors for the Swedish money market funds. Data has been collected from several different sources for 2003 to the third quarter of 2009. Data at the aggregate level has been collected from the Swedish Investment Fund Association (Fondbolagen), Statistics Sweden, and Morningstar. To analyze specific holdings, we examined the portfolios of seven large money market funds. The data on individual funds was collected from the Swedish Financial Supervisory Authority (Finansinspektionen), the funds annual and semi-annual reports, and from the fund companies themselves. The paper focuses on money market funds mainly investing in securities issued in domestic currency. For the Swedish and Icelandic markets, money market funds are defined as short-term bond funds with an average maturity of less than one year. The corresponding definition for U.S. money market funds is 90 days. Differences in definitions will be discussed later in the paper.


What happened to the money market funds in the U.S. and Iceland, and why were these funds considered important for financial stability?
A run on U.S. money market funds
The U.S. money market mutual fund market is the largest of its kind in the world (about U.S.$4 trillion). The funds invest in short-term assets and the weighted average maturity of the portfolios of money market funds is restricted to 90 days [Baba et al. (2009)]. The U.S. money market funds are structured to maintain a stable net asset value of U.S.$1 per share; this is called the Buck system.3 This simplicity is important because there are a lot of transactions in the funds as they are often used as a cash management tool. As the money market funds do not have capital buffers, they instead rely on discretionary financial support from their sponsors whenever the value of a share threatens to fall below U.S.$1. The Buck system had only been broken once in 30 years, until the fall of Lehman Brothers in September 2008.4 On 16 September 2008, the day after Lehman Brothers' fall, the Reserve Primary Fund announced a share price for its flagship fund of 97 cents. This was the first money market fund open to the general public to ever break a buck. The U.S.$64.8 billion fund held U.S.$785 million in Lehman Brothers' commercial paper [Waggoner (2009)]. In the end, it was the combination of the holdings, the large redemptions, and the lack of resources from the sponsor (the fund company Reserve) to back the fund that led the fund's net asset value to drop to 97 cents [McCabe and Palumbo (2009)]. This event triggered a run on U.S. money market funds, especially funds that invested in non-government securities. Investors moved their money to funds that, for example, only invested in government securities and bank deposits. Institutional investors liquidated much more than retail investors. As an example, institutional investors liquidated 16 percent of their holdings in a couple of days, while individuals liquidated 3 percent at the same time [Baba et al. (2009)]. This had severe financial stability implications, including freezing the commercial paper market. U.S. money market funds held nearly 40 percent of the outstanding volume of U.S. commercial paper in the first half of 2008 [Baba et al. (2009)]. The U.S. government stepped in and guaranteed U.S. money market mutual funds on 18 September, 2008. In the press release from the U.S. Treasury (2008), the justification for the action was to protect and restore investor confidence and the stability of the global financial system. The money market funds were considered to be of systemic importance, as they have an important role as a savings and investment vehicle, as well as a source of financing for the capital markets.5 The U.S. government believed that the concerns about the net asset value of money market funds falling below U.S.$1 had exacerbated global financial market turmoil, causing a spike in certain short-term interest rates and increased volatility in foreign exchange markets. The event also provoked severe liquidity strains in world markets. European banks, which have experienced a large growth in U.S. dollar assets, were affected when the opportunities for dollar funding were reduced, partly due to the problems with the U.S. money market funds [Baba and Packer (2009)]. In its press release, the U.S. Treasury concluded that actions from the authorities were necessary to reduce the risk of further heightened global instability.6

The weighted average maturity of Swedish and Icelandic funds is around one year.
The U.S. money market funds are categorized by their investment objectives and the type of investors in the fund. For example, they can be divided into prime funds that invest in non-government securities, which can be divided further into institutional or retail prime funds depending on the investors.
The Buck system provides convenience and simplicity to investors in terms of tax, accounting, and record keeping. Returns on investments are paid out as dividends with no capital gains or losses to track.
A small institutional money market fund, Community Bankers Money Fund, broke the buck in 1994. However, this had no effect on financial stability.
The U.S. money market funds were also important for the asset-backed commercial paper market (ABCP) and thus, there was a connection between the funds and the real economy. However, the money market funds did not experience large redemptions during the ABCP crisis that started in mid-2007 (sponsor support played a large role there).
New rules from the SEC for the U.S. money market funds were adopted in January 2010. These new rules include new liquidity requirements, tighter constraints on credit quality, new disclosure requirements, and new procedures for the orderly shutdown of money market funds that break the buck [McCabe and Palumbo (2009)].


The Icelandic money market funds crashed with the financial system
In Iceland, the money market funds were not the source of the crisis, but they were severely affected and thus aggravated the crisis. The Icelandic case is interesting for two reasons: firstly, it shows that money market funds can be risky and make large losses; secondly, it points to the risks of excessive connections between the mutual funds and the domestic banks. The largest money market funds in Iceland, in terms of assets under management in Icelandic kronor, were owned by Landsbanki's, Kaupthing's, and Glitnir's fund companies.7 These banks were taken over by the Icelandic government in October 2008. Around the time of their collapse, the money market funds were closed. When the financial system in Iceland collapsed, it affected all major issuers of fixed income securities in Iceland, financial institutions as well as corporations. New emergency legislation was implemented in Iceland on 6 October, 2008, in which bank deposits were given priority before all other claims. Before this new law was implemented, bonds and certificates had the same right to claims as deposits. Thus, the new legislation had a negative impact on the money market funds' recovery rate from securities. To protect the investors in money market funds, the government decided that the banks, now government-owned, would themselves resolve the issue, with the interests of their investors as their primary goal.9 The banks then bought back securities from the money market funds for a total of about €552 million (ISK 83 billion) before they paid their investors. According to Morgunbladid, 60 percent of that amount has now been written off [Juliusson (2009)].10 The money market funds returned between 69 and 85 percent of their value to their investors after the securities had been bought from the funds [Juliusson (2009)]. Securities issued by financial institutions and corporations accounted for the majority of losses [Sigfusdottir (2008)], despite the fact that securities seem to have been bought back at higher prices than they were ultimately worth, given the write-offs. Two out of three money market funds did not report negative returns before they were closed and, subsequently, reported losses of between 15 and 31 percent. The exception was Glitnir, where the money market fund's returns decreased when the bank received emergency funding from the government a few days before the system collapse. The fund had to be closed for three days due to large outflows following negative media attention and problems encountered by corporations linked to Glitnir. The fund then opened again for a short period until the bank was fully taken over by the government. The other money market funds also experienced outflows in 2008, although the amounts of these varied between funds. Outflows were especially large around the time that Glitnir received emergency funding from the government [Sigfusdottir (2008)].

The Icelandic money market funds were poorly diversified with substantial linkages to parent banks
The Icelandic money market funds had mainly invested in domestic securities issued by financial institutions and corporations. For example, the money market fund owned by Landsbanki had 60 percent of its invested capital with exposure to financial institutions11 and 40 percent invested in securities issued by corporations at its closing [Sigfusdottir (2008)]. In addition, all the funds had invested a large proportion of their assets in securities linked to the Icelandic banking system, either directly in securities issued by financial institutions, by corporations with ownership stakes in the Icelandic banks, or even by the banks' major debtors. At the time of closing, the money market fund of Landsbanki had 41 percent and Kaupthing 21 percent in securities connected to their own parent banks [Gunnarsdottir (2008)]. Glitnir's fund, Sjodur 9, had 46 percent in securities connected to Glitnir, according to its last semi-annual report in 2008. In addition, the money market funds also had some deposits with their own bank. For example, in Kaupthing's case, deposits amounted to 66 percent of the fund at its closing, a large part of which was held with Kaupthing [SIC (2010)].12 It is, therefore, evident that the Icelandic money market funds formed a source of funding for their parent banks to a certain extent. Nevertheless, the direct access to funding in the money market funds was helpful when foreign funding markets closed for the Icelandic banks in 2008. The three large money market funds amounted to €4.4 billion (ISK 400 billion) at their peak at the end of 2007, an amount equivalent to approximately 30 percent of Iceland's GDP in 2007. At the same time, household deposits with the Icelandic banks amounted to €6 billion (ISK 550 billion).13 In fact, many households used money market funds as a substitute for normal bank deposits. Money market funds are often considered to involve very low risk because their returns are stable. However, in Iceland, the money market funds were exposed to large systematic risk because of the small size of the bond market and the concentration of market securities.14 The funds mainly invested in securities issued in Icelandic kronor. Investment in government securities was minimal and the reason given for this was the small supply of government bonds available in the market. In normal times, funds like these should be diversified enough to be able to handle losses stemming from one issuer. However, when the whole financial system collapses, the situation is very different. In such a situation, not even a high degree of diversification will help. Even though the money market funds had invested in securities with the highest credit rating available on the Icelandic market, they made large losses. Despite that fact, it can be argued that the diversification in the funds was not satisfactory and that substantial linkage to the money market funds' parent banks created large risks.15



7 The three parent banks accounted for 90 percent of the total financial system in Iceland according to Sedlabanki's 2008 Stability Report.
8 Icelandic law number 125/2008: http://www.althingi.is/lagas/nuna/2008125.html.
9 A great deal of media attention focused on the money market funds around the time of their closure, when it was evident that losses would be made. Many household investors had not fully understood the possible risks involved in investing in those funds, as they had been marketed almost as a substitute to bank accounts with similar risk [SIC (2010)].
10 The buyback was based on the expected recovery rate of securities, although there was great uncertainty at the time.
11 In Landsbanki's data the exposure to financial institutions included both issued securities and deposits. A graph of the development of the fund indicates that deposits seem to have been around half of the exposure to financial institutions [Sigfusdottir (2008)].
12 If the emergency law making deposits priority claims had not been implemented, the losses of Kaupthing's money market fund investors (for example) would have been a lot larger, as a large part of its portfolio was in deposits.
13 According to data from the Central Bank of Iceland.
14 35 percent could be invested in one counterparty, and then other counterparties had to account for less than 20 percent [Kaupthing (2008)].




How are money market funds important for financial stability?

The U.S. and Icelandic crises concerning money market funds point to some explicit ways in which money market funds are important for financial stability. These are specified in more detail in this section and then investigated further in terms of the Swedish market in the following section.

Potentially large market impact from fire sales


The liquidity of financial securities (for example covered bonds) and corporate securities decreased during the crisis, especially around the collapse of Lehman Brothers. This lower liquidity was manifested in higher bid-ask spreads, lower prices and turnover. It was, in fact, even difficult to get a price for financial and corporate securities that, in normal times, had been liquid in the market.16 Investors risk aversion increased sharply. The liquidity problems for money market funds were evident both in Iceland and the U.S. If funds are forced to liquidate securities in such a market, unless the managers are willing to realize losses, it is likely that all funds would have to sell the most liquid and (in relative terms) most fairly priced securities, such as, for example, government bonds.


Spill-over effects
In the U.S. situation, the authorities explicitly stated that one of the main reasons for guaranteeing the value of the money market funds after the collapse of Lehman Brothers was spill-over effects resulting in further heightened global instability. In Iceland, it is likely that the government-owned banks' purchases of assets were performed in order to avoid further financial instability and decrease the losses of investors. In order to minimize panic in the market, at the time of the system failure, the government of Iceland emphasized that all deposits would be guaranteed. Spill-over effects from problems in money market funds to the banking system are more likely if there is a high concentration in a small number of funds and if the fund market is dominated by the major banks' mutual fund companies, which was the case in Iceland.

Source of funding for systemically important financial institutions and markets


Money market funds can be a source of funding for banks. This was particularly evident in Iceland, where the money market funds had invested a large share of their capital in securities issued by the domestic banks, and in particular, securities linked to the bank that owned the fund. If money market funds buy a large share of the short-term bonds and commercial paper issued by banks and other systemically important institutions, they become an important source of funding for these institutions. The continuous and consistent availability of short-term funding for these institutions is essential to financial stability. If the money market funds are the dominating investors in a specific submarket of the financial market, their problems may have negative consequences that may spread through the financial system as well as to the real economy in the long run. This is illustrated by the U.S. money market funds, which were essential for the commercial paper market. When the U.S. money market funds incurred losses from defaults in connection with the Lehman Brothers collapse and were not able to continue investing to the same degree as before because of large redemptions, the commercial paper market froze. If this had continued for a more extensive period, it could have had a negative effect on the financial and corporate sector and, in the end, the real economy. In addition, there were wide-ranging spill-over effects from the U.S. market to the global financial markets.

Investors' degree of sophistication affects flows


Investors' expectations can have an effect on fund flows in a financial crisis. Investors include both retail investors (households) and institutional investors. If investors believe that money market funds are liquid and have low risk, their reactions may be stronger in the event of problems than would otherwise be the case. According to Henriques (2008), retail investors in the U.S. considered money market funds to be as safe as bank savings accounts. This also appeared to be the case in Iceland. The main outflow from Icelandic money market funds occurred after negative media attention focused on Glitnir's money market fund. Sirri and Tufano (1998) find that U.S. mutual fund flows are directly related to the current media attention received by the funds. Klibanoff et al. (1998) also find that investors react more strongly to headlines in the newspapers. Consequently, extensive media coverage of problems in mutual funds could have a major impact on fund flows. Investor sentiment can also be important. According to Davis and Stein (2001), households are more likely to have more diverse views than groups of institutional investors. This is supported by the fact that the largest outflows from the U.S. money market funds came from institutional investors.

15 Althingis Special Investigation Commissions report about the collapse of the Icelandic banks includes a chapter on the Icelandic money market funds. The main conclusions of the Committee are that the funds were excessively linked to their parent companies in terms of investment selection and that the separation between the fund company and parent bank was unsatisfactory. The money market funds grew very fast and became too large for the Icelandic securities market since the supply of solid and liquid securities was limited. The interests of the parent bank seem to have been prioritized ahead of the interests of the investors. In some cases, the investors were not provided with reliable information about the standing of their investments, which were frequently worse than the returns of the funds implied. Outflows from the funds in 2008 were also investigated and the Commission has reason to believe that some investors (individuals and corporations linked to the banks) had better information than others. This issue has been sent to the Public Prosecutor in Iceland and the Financial Supervisory Authority for further investigation [SIC (2010)]. 16 From conversations with fund managers of the largest Swedish money market funds.


Table 1  Summary statistics

                      All      SHB   Nordea      SEB   Swedbank   Others
Number of funds        45        2        7        5          5       26
AUM (€ million)     20037     2107     5928     3322       4776     3903
Average AUM           445     1053      847      664        955      150
Median AUM            158     1053      287      917        969       73
Max AUM              2781     1512     2781     1224       1634      814
Min AUM                12      595       27       35        177       12
Market share                   11%      30%      17%        24%      19%

The table shows summary statistics for Swedish money market funds investing in SEK. The first column is for all funds in the sample, the next four columns represent funds owned by the four largest banks' fund companies, and the last column all other mutual fund companies. The data was collected from Morningstar on 30 September 2009.

Table 2  Seven large Swedish money market funds (30 September 2009)

Fund name                                    AUM (€ million)   % of market AUM
Nordea Sekura                                           2676               13%
Swedbank Robur Svensk Likviditetsfond                   1714                9%
Swedbank Robur Penningmarknadsfond                      1601                8%
Handelsbanken Lux Korträntefond                         1418                7%
Nordea Institutionell Penningmarknadsfond               1393                7%
Nordea Likviditetsinvest                                1198                6%
SEB Penningmarknadsfond SEK                             1019                5%
Total                                                  11019               55%

The data has been collected from the Swedish Financial Supervisory Authority, the funds' annual and semi-annual reports, and directly from the fund companies.



Characteristics of Swedish money market funds that may affect their influence on financial stability
Certain factors increase the risk of spill-over effects from the money market funds to the rest of the financial system (and especially the major banks). Here we look at three factors applying to the Swedish money market funds: firstly, whether the money market funds have substantial amounts of assets under management; secondly, whether the assets under management are concentrated in a few large funds; and, thirdly, whether there is any significant connection with the systemically important banks.

The size of the Swedish market for money market funds is relatively small
Table 1 presents summary statistics for the Swedish market, both for money market funds registered in Sweden and abroad. According to Morningstar, there are 45 mutual funds classified as money market funds investing in securities issued in SEK. The total assets under management by these funds were about €20 billion (SEK 204 billion) at the end of the third quarter of 2009. According to Fondbolagen's data, the money market funds constitute about 14 percent of the total Swedish mutual fund market, while long-term bond funds constitute about 11 percent.17

The Swedish market is highly concentrated and dominated by the major banks
The average fund has €0.4 billion (SEK 4.5 billion) in assets under management, but the median is only €0.2 billion (SEK 1.6 billion). This implies that there are several large funds in the sample, as illustrated by the maximum levels shown in Table 1. The largest fund has €2.8 billion (SEK 28 billion) in assets under management. The smallest fund has only €12 million (SEK 100 million) under management. Of the 45 funds, 19 funds are managed by the four largest Swedish banks' mutual fund companies (Svenska Handelsbanken (SHB), SEB, Nordea and Swedbank). Even though these funds account for 42 percent of the total number of funds, the market share of assets under management is equivalent to 81 percent. Nordea has the largest market share (30 percent) followed by Swedbank (24 percent). SEB and SHB have market shares of 17 and 11 percent, respectively. Table 2 presents the assets under management and the percentage of total market assets under management for seven large money market funds investing in Swedish securities at the end of the third quarter of 2009. The largest fund is Nordea Sekura, with a market share of 13 percent, followed by Swedbank Robur Svensk Likviditetsfond (market share of 9 percent). The seven funds have a total market share of 55 percent. The Swedish market for money market funds is thus highly concentrated, with seven of the largest funds having more than half of the assets under management. That implies that it could be enough for one or a small number of money market funds to run into trouble for there to be larger implications for the financial system. Financial stability could be affected, especially if the general market situation at the time were to be characterized by large uncertainty, as was shown by the Icelandic example. The fact that the market is dominated by the four largest banks in terms of assets under management also has implications for financial stability. These implications are mainly spill-over effects from problems with the money market funds to the banking sector. For example, there is a risk of a negative reputation effect with lower trust in the banks as a result. As an example, according to Svenska Dagbladet (2009), Swedbank Robur compensated mutual fund investors in Estonia to protect the reputation of the mutual fund company as well as the parent bank. If investors lose their trust in a certain bank's funds, the result could be that they withdraw their money, not only from the funds, but also from their deposits in the bank. Bank deposits are an important source of funding for the Swedish banks [Sveriges Riksbank (2009)].
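The concentration figures quoted above can be cross-checked directly against the Table 1 aggregates. The following is a minimal arithmetic sketch using the reported assets under management; the variable names are ours, and the one-unit rounding difference against the reported total of €20,037 million comes from the table itself.

```python
# AUM per fund-company group, EUR million, as reported in Table 1.
aum = {"SHB": 2107, "Nordea": 5928, "SEB": 3322, "Swedbank": 4776, "Others": 3903}

total = sum(aum.values())          # 20,036 (Table 1 reports 20,037; rounding)
big_four = total - aum["Others"]   # AUM managed by the four largest banks' fund companies

print(f"Big-four market share: {big_four / total:.0%}")      # ~81%
print(f"Nordea market share:   {aum['Nordea'] / total:.0%}")  # ~30%
```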

17 To compare, the European market for money market funds has about €1,000 billion in assets under management, which is 21 percent of the total European fund market [EFAMA (2010)]. The numbers are for UCITS funds.




Retail investors are the largest investor group


Knowledge of the investors in mutual funds is of interest for financial stability given that this will provide information on who would be affected by a decrease in value of mutual funds. In addition, different types of investors may induce financial instability through their behavior. Households are the largest investor group in Swedish money market funds, with a share of 68 percent18 on average between 2007 and 2009 (Figure 1). Swedish corporations have kept their proportion to around 23 percent. The fact that household investors constitute a major component of the investors in these mutual funds indicates that the average investor is relatively unsophisticated. As stated before, this has positive and negative implications from a financial stability perspective. Retail investors are more sensitive to media attention about the funds, but they do not tend to react to market events as strongly, as quickly, or with as much coordination as institutional investors. In discussions, various Swedish fund managers stated that, during the crisis, it was mainly institutional investors that asked questions about the implications of the market events for the money market funds. This could be due to limited media attention concerning the issue in Sweden, with less information thus reaching the retail investors. However, the situation could be different in a future crisis.
Figure 1  Investor groups in Swedish money market funds (aggregate data), Q1 2007 to Q3 2009. The stacked shares cover Swedish households, non-profit institutions serving households, Swedish corporations, and other investors; Swedish households account for roughly two thirds of assets throughout the period.

Figure 2  Investments of Swedish money market funds (aggregate data), Q1 2007 to Q3 2009, broken down by issuer category: financial institutions, central government, other domestic issuers, and the rest of the world.

Securities issued by financial institutions are the largest component in the portfolios

From a financial stability perspective, it is interesting to know what securities these funds invest in, what they would potentially have to sell in the event of major redemptions, and which submarkets might be affected. Data at the aggregate level is available from Statistics Sweden.19 Taking a look at the components of the total investment of money market funds in Figure 2, bonds issued by financial institutions are the largest component, followed by securities issued by corporations. The share of bonds issued by financial institutions has increased from 46 percent in the first quarter of 2007 to 59 percent in the third quarter of 2009. This increase is likely to depend on three factors: a limited supply of available securities, the relatively higher yield they give compared to T-bills, and the reduction of risk due to the government guarantee programs that came into effect late in 2008. However, this increase may have consequences on the systematic risk taken by funds due to lower diversification between asset classes. Investment in foreign securities (issued in Swedish kronor) has decreased in every period, from a level of about 12 percent in the first quarter of 2007 to 5 percent in the second quarter of 2009, indicating a stronger home bias. It is likely that mutual funds decreased their investments in foreign-issued securities in Swedish kronor because of higher uncertainty about their issuers and poor performance by these securities during the crisis. In discussions, fund managers stated that the holding of geographically diversified portfolios had a negative effect on funds' ability to manage the crisis.

A closer investigation of the holdings of seven large money market funds


To get a better understanding of the investments undertaken by Swedish money market funds, we look at the holdings of seven large funds at three points in time: December 2007, December 2008, and June 2009. Table 3 shows the average exposure to asset classes (using nominal values) for the seven funds for the three periods in time. The assets are divided into covered bonds, bonds that are guaranteed by the government, government bonds, general bonds and notes, subordinated bonds, T-bills, commercial paper, and cash.20 The fact that Swedish money market funds can invest in securities with longer maturity (unlike U.S. funds) comes from the fact that these can have a weighted average maturity of up to one year.

18 This includes direct investments by households, individual pension saving (IPS), premium pension savings (PPM), and unit-linked investments (fondförsäkring).
19 The data only includes funds registered in Sweden.
20 The asset classes are categorized by Bloomberg with some revisions. Under general bonds and notes, we put Bloomberg categories bonds, notes, senior unsecured, senior notes, unsecured, unsubordinated, and company guarantee. Covered bonds, government guaranteed bonds, and general bonds and notes can all be FRNs.


Table 3  Average exposures to asset classes (7 funds)

                        Dec-07   Dec-08   June-09
Covered bonds              40%      35%       45%
Government guarantee        1%       1%        7%
Government bonds            0%       2%        3%
Bonds/notes                40%      33%       25%
Subordinated bonds          2%       2%        2%
T-bills                     4%       4%        2%
Commercial paper           12%      20%       13%
Cash                        2%       3%        2%

Nominal values are used. Data is collected from Bloomberg.


Table 4  Average exposures to type of issuer (7 funds)

                                  Dec-07   Dec-08   June-09
Financial (including mortgage)       76%      74%       79%
Government                            4%       6%        4%
Corporations                         18%      17%       15%
Cash                                  2%       3%        2%
(Mortgage)                           47%      47%       55%

Nominal values are used. Data is collected from Bloomberg.

No large bias towards investing in the parent banks' securities

Table 4 sorts the holdings of the seven largest funds into exposure to issuers. The issuers are divided into financial institutions (including mortgage institutions and real estate companies), government, corporations, and cash. Confirming the findings from aggregate data, financial institutions issue the vast majority of the securities in the portfolios, between 74 and 79 percent over time. The share of securities from mortgage institutions (including commercial paper, covered and regular bonds, and notes) has increased during the period.23 At the end of 2007, 47 percent of the portfolio was invested in securities from mortgage institutions. In June 2009, the corresponding figure was 55 percent. Table A1 in the Appendix displays the five largest exposures by issuer in each of the seven funds at three points in time. In general, the majority of the largest exposures are to mortgage institutions, which is consistent with previous findings. All funds but one (SHB Korträntefond) have one of their top five exposures, at all three points in time, to the mortgage institution Stadshypotek, a subsidiary of Svenska Handelsbanken. This is due to the fact that Stadshypotek is one of the largest issuers on the Swedish covered bond market. The fact that Stadshypotek remains the largest exposure from 2007 to 2009 indicates that it is considered a good investment even in turbulent times, given the flight to quality that normally occurs during a financial crisis. The largest exposure ranges from 8 to 24 percent, indicating some differences in counterparty diversification by the funds. Also, there is no large bias towards investing in the parent banks' securities, as in the Icelandic funds. The average exposure to securities issued by the parent bank is 11 percent for the Swedish funds (the corresponding figure for the Icelandic banks is approximately 36 percent).24



The share of covered bonds increased during the crisis


Covered bonds and general bonds and notes are the largest asset classes in the portfolios. At the end of 2007, they constituted, on average, 40 percent of the portfolio each. In June 2009, the share invested in bonds and notes had decreased, while the share invested in covered bonds had increased to 45 percent. This could be interpreted as the result of managers decreasing the risk in their portfolios during the financial crisis by increasing the proportion of covered bonds to regular bonds. It could also partly be a result of the issue, by the banks, of large quantities of covered bonds during the period. Government bonds increased from almost nothing to 3 percent, and bonds guaranteed by the government increased from 1 to 7 percent in the same period, again indicating increased risk aversion. In addition, commercial paper increased from 12 percent in 2007 to 20 percent in 2008. On the other hand, according to fund managers, the commercial paper market closed for a period of time after the collapse of Lehman Brothers. It became difficult to sell commercial paper in the secondary market, which meant that these papers became less liquid.21 This, in turn, made it more difficult for commercial paper issuers to get buyers for their issuance.

A large share of the securities are floating rate notes


Floating rate notes (FRNs)22 constitute a large part of the bonds in the portfolios. For example, for the largest fund, Nordea Sekura, around 74 percent of the total assets under management were invested in FRNs in 2009. The interest rate on FRNs changes with short intervals (for example every three months). Consequently, the interest rate risk is still low but, since there is a credit risk premium, the return is higher than for securities with shorter maturities. The longer maturity length of FRNs does not affect the portfolio's sensitivity to interest rate changes to any great extent. However, having a portfolio dominated by FRNs may have implications for the liquidity risk of the portfolio, especially if the credit risk increases considerably in a financial crisis.
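The low interest rate sensitivity described here can be illustrated with a stylized first-order duration approximation, in which an FRN reprices at its next coupon reset while a fixed-rate note is exposed over its full maturity. This is a simplified sketch for intuition, not the funds' own risk methodology, and the numbers are hypothetical.

```python
def price_change_pct(duration_years: float, rate_shift_bp: float) -> float:
    """Approximate price change in percent for a parallel shift in rates,
    using the first-order relation dP/P ~ -duration * dy."""
    return -duration_years * rate_shift_bp / 100.0

# A quarterly-resetting FRN behaves roughly like a 0.25-year instrument with
# respect to the general level of rates, while a one-year fixed-rate note is
# exposed over its full maturity.
print(price_change_pct(0.25, 100))  # FRN: about -0.25% for a +100 bp shift
print(price_change_pct(1.00, 100))  # 1-year fixed-rate note: about -1.0%
# Note: spread (credit) risk remains over the full life of the FRN, which is
# why liquidity and credit risk can still matter in a crisis.
```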

21 During this period all securities with credit risk became more difficult to trade. The commercial paper market is one example (especially issuance by financial institutions and corporations). Bonds issued by financial institutions and corporations and structured products are other examples. 22 FRNs are issued by all types of institutions and can be covered bonds, government guaranteed bonds, general bonds, and notes, etc. 23 The mortgage institutions are the Swedish hypotek. 24 This is without deposits, which would increase the exposure substantially for the Icelandic funds, given large deposit position by some.


Cross-investments lower in the crisis


Table A2 in the Appendix shows, in detail, the cross-investments between funds. That is, the proportion of a funds assets invested in exactly the same securities as another fund. If the funds largely invest in the same securities, this increases the systematic risks and the potential market impact in the event that the funds should need to liquidate securities. Some cross-investments are to be expected, given the relatively small size of the Swedish market and the limited selection of securities. This limitation arises from the fact that the funds invest in securities issued in Swedish kronor. As Nordea Sekura is the largest fund, it could be expected that the other funds would have the highest percentages of cross-investment with this fund. However, the cross-investments with Sekura were much higher in 2007 (73 percent on average) than in 2008 (37 percent) and 2009 (47 percent). We observe a similar trend between the other funds.
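The cross-investment measure described in this subsection can be read as the share of one fund's portfolio, by market value, held in securities that also appear in another fund's portfolio. The following is a minimal sketch under that reading, with entirely hypothetical holdings, since the underlying fund-level data is not reproduced here.

```python
def cross_investment(fund_a: dict, fund_b: dict) -> float:
    """Share of fund A's assets invested in securities that fund B also holds.
    Each argument maps a security identifier to a market value."""
    common = set(fund_a) & set(fund_b)
    return sum(fund_a[sec] for sec in common) / sum(fund_a.values())

# Hypothetical holdings, market values in EUR million.
fund_a = {"BOND_1": 120, "BOND_2": 80, "BOND_3": 50}
fund_b = {"BOND_1": 60, "BOND_3": 90, "BOND_4": 40}

print(f"{cross_investment(fund_a, fund_b):.0%}")  # 68% of fund A overlaps with fund B
print(f"{cross_investment(fund_b, fund_a):.0%}")  # the measure is not symmetric
```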

Swedish money market funds are not an important source of funding for major banks but more important for certain markets
For the funds to be an important source of funding for the Swedish banks, a substantial part of the banks' outstanding bonds should be held by the money market funds. However, since that information cannot easily be obtained from the data, we have made a rough estimate. According to Statistics Sweden, Swedish financial institutions (banks and mortgage institutions, etc.) issued securities in Swedish kronor in the amount of €138 billion (SEK 1,509 billion) in the second quarter of 2009 (both long-term and short-term).26 This constituted 24 percent of the banks' total interest-bearing funding [Blomberg (2009)]. Given that the money market funds at that time had total assets under management of €20 billion and, on average, 79 percent was invested in securities issued by financial institutions, only a small part of the institutions' securities funding would potentially come from money market funds. Additionally, according to Statistics Sweden's financial market statistics, deposits and borrowings from Swedish non-financial institutions amounted to €164 billion (SEK 1,792 billion) in the second quarter of 2009, accounting for 29 percent of the banks' total interest-bearing funding [Blomberg (2009)]. Out of that €164 billion, households' deposits amounted to 43 percent of this funding. Consequently, if household investors lose their trust in a certain bank's funds and withdraw their money, not only from the funds but also from their deposits in the bank, this may have an effect on the bank in question. However, if we look at the covered bond market in particular, we find that, on average, the largest funds invest 45 percent of their portfolios in covered bonds. Assuming that, on average, the share is the same for all Swedish money market funds, the funds would have around 8 percent of all outstanding covered bonds denominated in SEK.27 The Riksbank estimates the Swedish market for corporate commercial paper to be worth about €10 billion. According to the same assumption, the money market funds would thus have about 12 percent of the outstanding commercial paper issued by corporations. Although this is not a huge figure, it is not entirely insignificant. It is not only the size of the invested capital that matters but also the mobility of the capital. The experience of the Swedish covered bond market in 2008 shows that quick withdrawals of capital can have a substantial effect on the stability of the market. After Lehman Brothers' collapse, many foreign investors wanted to sell their holdings of Swedish covered bonds quickly. The market makers had problems handling the large volumes of selling orders, which then disrupted the market. The problems in the covered bond market were reduced in mid-September 2008, when the Swedish National Debt Office started to issue large extra volumes of T-bills to meet heightened demand for these short-term securities. The money from these extra auctions was placed in reverse repos with covered bonds as collateral. In 2008, foreign investors decreased their holdings of Swedish covered bonds by around €11 billion compared to 2007. Although that figure is only about 7 percent of the covered bond market, the outflow had a substantial impact due to its rapid pace.
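The rough estimate of how much of the banks' securities funding could come from the money market funds follows from a simple ratio of the aggregates quoted earlier in this subsection. The sketch below is a back-of-the-envelope upper bound, since not all of the funds' financial-sector holdings are necessarily issued by the Swedish banks and mortgage institutions in question; combining the same kind of ratio with the externally sourced market sizes for covered bonds and corporate commercial paper gives the roughly 8 and 12 percent shares mentioned above.

```python
# Approximate figures quoted in the text, EUR billion.
mmf_aum = 20.0          # total money market fund AUM, Q3 2009
share_financial = 0.79  # average portfolio share in securities issued by financial institutions
sek_issuance = 138.0    # SEK securities issued by Swedish financial institutions, Q2 2009

held_by_funds = mmf_aum * share_financial                # roughly EUR 16 billion
print(f"{held_by_funds / sek_issuance:.0%} of the institutions' SEK securities funding, at most")  # ~11%
```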

Swedish funds do not hold a large share of the outstanding amount of a security
Another point to consider is how much the funds own of each security compared to the outstanding amount. If they hold a large share of the outstanding amount of one security, this will affect liquidity, making the security harder to sell in the market, should the need arise. Making a rough estimate of the funds' ownership of bonds, compared to their outstanding amounts in June 2009 according to Bloomberg, we see that the weighted average ownership of each bond is around 15 percent.25 This indicates that the Swedish funds, in general, do not own large amounts of single bonds, as was the case in Iceland, where, in some cases, the funds even held the full amount, which severely affected the liquidity of the bonds [SIC (2010)].
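The weighting scheme behind this estimate is not spelled out in the text; a minimal sketch, assuming the ownership ratios are weighted by the size of the funds' holdings, is given below. The bond amounts are invented for illustration and are not the actual Riksbank or Bloomberg data.

    # Sketch of the ownership estimate: for each bond, compare the funds' combined
    # holding with the outstanding amount, then average the ratios using the
    # holdings as weights. All figures illustrative.

    bonds = {
        # identifier: (funds' combined holding, outstanding amount)
        "COVERED_A": (900.0, 6000.0),
        "COVERED_B": (400.0, 2500.0),
        "BANK_BOND_C": (150.0, 1200.0),
    }

    total_held = sum(h for h, _ in bonds.values())
    weighted_ownership = sum(h * (h / o) for h, o in bonds.values()) / total_held
    print(f"Weighted average ownership per bond: {weighted_ownership:.1%}")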

Greater home bias could indicate higher risks in the future


Although lower cross-investments between the funds are positive for financial stability and the potential market impact of money market funds, the decrease in portfolio diversification during the financial crisis can have negative effects in the long run. The share of covered bonds has increased, making the funds more dependent on this market. There is also less diversification among issuers, and the home bias has increased. A more diversified portfolio is normally preferable. However, in the special case of the recent financial crisis, higher diversification was negative from the fund managers' perspective: in a financial crisis, the correlation between assets increases, which reduces the benefits of diversification. Also, Adrian and Shin (2008) argue that, when there is a high degree of diversification in the financial system, a small shock can be amplified through market prices. The increased home bias in the managers' portfolios is a natural development, as the funds that diversified their portfolios with foreign securities were punished in the crisis. However, from a financial stability perspective, a strong home bias could indicate a higher risk for the Swedish financial system if problems with the domestic markets occur in the future.

25 Note that this is only a rough estimate, as information on certain securities could not be found in Bloomberg (although the majority could be found). In addition, commercial paper is not included, as Bloomberg does not provide information on it (commercial paper accounted for 13 percent of the portfolios in June 2009).
26 This data does not include subsidiaries.
27 The estimate of the size of the covered bond market comes from the website of the Association of Swedish Covered Bonds.


After Lehman Brothers' collapse, many foreign investors wanted to sell their holdings of Swedish covered bonds quickly. The market makers had problems handling the large volumes of sell orders, which disrupted the market. The problems in the covered bond market eased in mid-September 2008, when the Swedish National Debt Office started to issue large extra volumes of T-bills to meet the heightened demand for these short-term securities. The money from these extra auctions was placed in reverse repos with covered bonds as collateral. In 2008, foreign investors decreased their holdings of Swedish covered bonds by around 11 billion compared to 2007. Although that figure is only about 7 percent of the covered bond market, the outflow had a substantial impact due to its rapid pace.


Several similarities with Icelandic funds


If, on one hand, there are few similarities between Swedish and U.S. funds, there are, on the other, several similarities between Swedish and Icelandic funds. The Icelandic money market funds were similar to the Swedish money market funds in terms of investments, returns, and the purpose they serve for investors. As in Sweden, there is currently no clear definition of money market funds in Iceland. It is up to the funds to define their average maturity and in both countries money market funds are usually defined as having an average maturity of about a year or less. Although there is no exact data on the investors in the Icelandic money market funds, large proportions were households, as in Sweden. Also, the Icelandic funds were not a major cash management tool for corporations, unlike the funds in the U.S. In Sweden and Iceland, the supply of government bonds (i.e., bonds issued in domestic currency) was small, so the funds consisted mostly of securities issued by financial institutions and corporations, although cash increased in the Icelandic funds in the period before the system collapse. On the other hand, the Swedish bond market, of which covered bonds form a large part, is larger than the Icelandic bond market. Consequently, the diversification opportunities are better in the Swedish market, although both markets are still small compared to the U.S. market. The investments made by the Swedish money market funds are also closely linked by ownership to the major banks in Sweden, a situation similar to that in Iceland. However, Swedish funds do not have the same strong bias towards investing in securities issued by the parent bank as the Icelandic funds did. Another important factor to consider is that Sweden is a larger country than Iceland and its banking sector is not as big in terms of GDP as the Icelandic banking sector was before the collapse. Also, largely due to the careful regulation of financial markets (including money market funds) and the experience of the domestic banking crisis in the 1990s, Sweden was better prepared for the crisis than Iceland.

How similar is the Swedish situation to the U.S. and Icelandic situations?
Few similarities with the U.S. market
U.S. money market funds are very different from their Swedish peers. For example, as previously mentioned, the weighted average maturity of the portfolios of U.S. money market funds is restricted to 90 days. In Sweden, there is no set rule concerning the weighted average maturity of the portfolio; Statistics Sweden's definition of money market funds is that the weighted average maturity is 397 days or less.28 Consequently, it is not easy to compare Swedish and U.S. funds directly. Constraining the maturity of money market funds should have a positive effect on financial stability. Given that Swedish money market funds can invest in both commercial paper and long-term bonds (such as covered bonds), they can potentially affect both markets if compelled to sell securities. The problems in the U.S. funds mainly affected the money markets. Concerning the funds' potential market impact, money market funds account for 30 percent of the U.S. fund market, compared to around 15 percent in Sweden. Institutional owners play a large role in U.S. money market funds.29 The Investment Company Institute in the U.S. estimates that around 80 percent of U.S. companies use money market funds for their cash management. There are no corresponding figures for Sweden, but only about 23 percent of the assets of Swedish money market funds are held by corporations. The money market funds in the U.S. were also important for the asset-backed commercial paper market, and, thus, problems with the money market funds had direct implications for the real economy. In Sweden, the money market funds have not invested in structured products, but the covered bonds are linked to the Swedish housing market. Because Swedish mutual funds do not report returns under the U.S. stable net asset value ("buck") convention, negative performance is likely to have more severe consequences in the U.S. than in Sweden. Fund sponsors in the U.S. have provided financial support when the market value of a share threatened to fall substantially below one dollar, although there is no official guarantee that fund shares should always stay above the dollar.30 The Swedish money market funds, on the other hand, can both increase and decrease in value, a fact known to most investors, even though, in normal times, the funds have shown stable positive returns. Swedish funds are more sensitive to changes in interest rates than U.S. funds, given that they can hold securities with longer maturities.

28 According to email correspondence with Johannes Holmberg, Statistics Sweden.
29 According to email correspondence with Fredrik Pettersson, Fondbolagen.
30 Given the short maturity of U.S. money market funds (90 days), the volatility in the funds is low and, thus, in most cases the support does not involve much risk for the sponsors.



How did the recent financial crisis affect Swedish money market funds?
No run on Swedish money market funds
Figure 3 shows the monthly net capital flows to money market funds from 2007 to 2009. The figure also plots the repo rate (monthly averages) as an indication of the general level of interest rates. The largest inflow into money market funds was in August 2007, amounting to €918 million (SEK 7.4 billion). In that month, there were substantial outflows from equity funds. This is directly linked to the beginning of the subprime crisis, a liquidity crisis that turned into a long period of recession. In a financial crisis, investors want more liquid and less risky investment portfolios and thus turn to money market funds. However, money market funds turned out to be more risky than previously thought. The Lehman Brothers collapse completely changed risk perceptions in the market. Swedish money market funds did not experience runs in the period after Lehman Brothers' collapse. However, sales and redemptions in the funds increased rapidly, even though net capital flows show inflows. This put considerable stress on the funds, especially on those (few) that held securities issued by Lehman Brothers or other U.S. financial institutions.

Source: Fondbolagen and the Riksbank.

Figure 3 Monthly net flows to money market funds (MEUR) and the repo rate (percent), 2007-2009


These securities were probably held by some managers because they had a relatively high credit rating, as well as yield, before the failure, and because the managers found it unlikely that the authorities would let the investment bank fail. However, this assumption turned out to be incorrect.

Lower liquidity in the Swedish bond market


In addition, liquidity disappeared from the Swedish bond market for a few days after Lehman Brothers' collapse. Consistent with this is the extreme increase in bid-ask spreads for covered bonds in the Swedish market during those days (illustrated in Figure 4 by data for one large issuance of a benchmark covered bond). Liquidity is crucial for money market funds, given that they have to be able to pay out redemptions on the same day. However, according to statistics collected by the Riksbank, there was still some turnover in the Swedish covered bond market, indicating that there were investors willing (or forced) to trade during these days of acute stress (Figure 5). In a situation with large redemptions and low liquidity in the markets, money market funds can use the repo market to access cash, as the securities they invest in are commonly used in the repo market. It was, therefore, important that the repo market in Sweden continued to function throughout the financial crisis, although it became more difficult to use collateral with higher risk in repo agreements. Fortunately, the Swedish mutual money market funds were able to handle the redemptions during the most acute phase of the crisis. Although there was some media attention31 regarding the situation after Lehman Brothers' collapse, the outflows stayed at moderate levels.

Source: Bloomberg

Figure 4 Bid-ask spread for a Swedish benchmark covered bond (Stadshypotek 6, maturity 12-16-2009)


31 See, for example, e24 (2008).

Source: Riksbank. MEUR.


Figure 5 Turnover in the Swedish covered bond market (MEUR)


Conclusion
We find that there are similarities between the Swedish and Icelandic money market funds, but few similarities with the U.S. funds.

Future regulations may have an impact


In the future, the new proposed regulations for banks in Europe may affect the Swedish money market funds. If banks are required to lengthen the maturity of their funding, focusing on issuing securities with longer maturities, the market for money market securities could shrink. This would have an adverse effect on the investment opportunities of money market funds.

Lack of diversification creates risks


In Sweden, money market funds invest, on average, almost 60 percent of their capital in securities issued by financial institutions (the corresponding number for the seven large funds is 79 percent). This lack of diversification may have consequences on the systematic risk taken by funds, as shown by the Icelandic experience. Although lower cross-investments between the funds are positive from the standpoint of financial stability and the potential market impact of money market funds, the decrease of diversification in the portfolios during the financial crisis could have a negative effect in the long run. The share of covered bonds has increased, making the funds more dependent on this market. Additionally, the home bias has increased. Although lower diversification might have had a positive effect in this crisis, given that there were fewer problems on the Swedish market, compared to, for example, the U.S., lower diversification implies more systematic risk in the Swedish money market funds. Consequently, if a problem were to arise in Sweden, this would have a greater impact on the funds.


Swedish funds were able to handle the effects of Lehman Brothers' collapse
Liquidity disappeared from the bond and money markets after Lehman Brothers' collapse. Liquidity is crucial for money market funds, given that they have to be able to pay out redemptions on the same day. It was, therefore, important that the repo market in Sweden continued to function throughout the financial crisis. Fortunately, the Swedish mutual money market funds were able to handle the redemptions during the most acute phase of the crisis. Although there was some media attention in Sweden regarding the situation following Lehman Brothers' collapse, the outflows stayed at moderate levels.


Funds can have a systemic impact through spill-over effects


Investigating the risks associated with the Swedish money market funds, we do not find that the funds, in isolation, constitute a large systemic risk. However, the funds are large enough, and sufficiently connected to the financial system, to be able to aggravate an already vulnerable situation. This was the case in both Iceland and the U.S. Given the relative size of the money market funds in Sweden, we find it unlikely that they are of systemic importance as a source of funding for the Swedish banks. The funds are more likely to have a systemic impact through spill-over effects on the banking system, especially in a market already characterized by high uncertainty and risk aversion. The money market funds are, however, more important for some parts of the financial markets, such as the markets for corporate commercial paper and covered bonds.

References
Adrian, T., and H. S. Shin, 2008, Liquidity and financial contagion, Banque de France Financial Stability Review, special issue on liquidity, no. 11, February
Baba, N., R. McCauley, and S. Ramaswamy, 2009, U.S. dollar money market funds and non-U.S. banks, BIS Quarterly Review, March
Blomberg, G., 2009, Funding of the banks during the financial crisis, Sveriges Riksbank Economic Commentaries, no. 11
Davis, E. P., and B. Stein, 2001, U.S. mutual funds and financial stability, Institutional Investor
E24, 2008, Lehman-kollapsen drabbar säkra investeringar, 16 September
EFAMA, 2010, Trends in the European investment fund industry in the fourth quarter of 2009 and results for the full year 2009, Quarterly Statistics Release, no. 40, March
Gunnarsdottir, G., 2008, Fjarfestu i tengdum felogum, Morgunbladid, 17 December
Henriques, D. B., 2008, Treasury to guarantee money market funds, New York Times, 20 September
Juliusson, T., 2009, Reidufe fyrir skuldabref, Morgunbladid, 8 October
Kaupthing, 2008, Uppgjor Kaupthings peningamarkadssjods, Kaupthing, December
Klibanoff, P., O. Lamont, and T. A. Wizman, 1998, Investor reaction to salient news in closed-end country funds, Journal of Finance, 53:2, 673-699
McCabe, P., and M. Palumbo, 2009, A case study of financial stability analysis and policy reform: the run on U.S. money market funds in September 2008, presentation at the Federal Reserve Board, January 29
Sigfusdottir, E., 2008, Opid bref til hlutdeildarskirteinishafa, Landsbanki, 10 December
Sirri, E. R., and P. Tufano, 1998, Costly search and mutual fund flows, Journal of Finance, 53:5, 1589-1622
Special Investigation Commission (SIC), 2010, Factors preceding and causing the collapse of the Icelandic banks in 2008 and related events, 12 April
Svenska Dagbladet, 2009, Ny kalldusch för Swedbank, 16 October
Sveriges Riksbank, 2009, Financial Stability Report, 2009:2
U.S. Department of the Treasury, 2008, Treasury announces guarantee program for money market funds, press release, 18 September
Waggoner, J., 2009, Money market mutual funds remain at risk of breaking the buck, USA Today, 17 September

Appendix
Nordea Sekura
  December 2007: Stadshypotek 13%, Spintab 10%, Landshypotek 7%, Nordea Hypotek 6%, Nordea 5%
  December 2008: Stadshypotek 9%, Landshypotek 8%, Swedbank 6%, Nordea 6%, DnB Nor 6%
  June 2009: Stadshypotek 11%, SBAB 9%, Landshypotek 8%, Swedbank 7%, Nordea Hypotek 6%

Swedbank Likviditetsfond
  December 2007: Nordea Hypotek 23%, Stadshypotek 12%, Swedbank Hypotek 9%, Swedbank 6%, SBAB 6%
  December 2008: Stadshypotek 23%, Swedish Government 15%, SEB 8%, Nordea Hypotek 7%, Swedbank Hypotek 6%
  June 2009: Stadshypotek 24%, Nordea Hypotek 15%, Swedbank Hypotek 15%, SBAB 9%, Landshypotek 7%

SHB Lux. Korträntefond
  December 2007: Swedish Government 16%, Nordbanken Hypotek 6%, Sandvik 4%, Vasakronan 3%, General Electric 3%
  December 2008: Swedish Government 13%, Stadshypotek 10%, Cash 6%, Swedish Covered 6%, Länsförsäkringar 5%
  June 2009: Länsförsäk. Hypotek 8%, Swedish Covered 5%, Handelsbanken 4%, Landshypotek 4%, Volvo 4%

Nordea Inst. Penn.
  December 2007: Stadshypotek 15%, Nordea Hypotek 14%, SEB Bolån 9%, Spintab 8%, Swedish Government 6%
  December 2008: Nordea Hypotek 22%, Stadshypotek 17%, Spintab 11%, SBAB 9%, SEB Bolån 5%
  June 2009: Stadshypotek 24%, Nordea Hypotek 15%, Swedish Government 9%, Länsförsäk. Hypotek 8%, SBAB 7%

Swedbank Penn.
  December 2007: Nordea Hypotek 24%, Stadshypotek 14%, Swedbank Hypotek 9%, Swedbank 6%, Swedish Covered 6%
  December 2008: Stadshypotek 24%, Swedbank Hypotek 13%, SEB 8%, Landshypotek 8%, Swedish Government 7%
  June 2009: Stadshypotek 23%, Swedbank Hypotek 15%, Nordea Hypotek 14%, Landshypotek 7%, SBAB 7%

Nordea Likv.
  December 2007: Stadshypotek 10%, Volvo 8%, Landshypotek 7%, SBAB 7%, Spintab 6%
  December 2008: SBAB 10%, Stadshypotek 10%, Swedbank 8%, SEB 8%, Landshypotek 7%
  June 2009: Stadshypotek 14%, SBAB 12%, Swedbank 9%, Landshypotek 7%, Länsförsäk. Hypotek 6%

SEB Penn.
  December 2007: Stadshypotek 21%, SEB 20%, Swedbank Hypotek 8%, Nordea Hypotek 8%, Swedish Covered 5%
  December 2008: Stadshypotek 22%, Swedbank Hypotek 15%, Nordea Hypotek 8%, SBAB 5%, Swedish Government 3%
  June 2009: Stadshypotek 23%, Swedbank Hypotek 12%, Nordea Hypotek 8%, SBAB 7%, Cash 5%

Table A1 Largest exposures by issuer (share of each fund's portfolio) in December 2007, December 2008, and June 2009


31 December 2007 SEB Penningm SEB Penn. SHB Kortrnte Nordea Sekura Nordea Likviditet. Nordea Instit. Penn. Swedbank Sv. Likvid. Swedbank Penn. 16% 68% 41% 61% 20% 18% SEB Penningm SEB Penn. SHB Kortrnte Nordea Sekura Nordea Likviditet. Nordea Instit. Penn. Swedbank Sv. Likvid. Swedbank Penn. 10% 31% 25% 18% 32% 29% SEB Penningm SEB Penn. SHB Kortrnte Nordea Sekura Nordea Likviditet. Nordea Instit. Penn. Swedbank Sv. Likvid. Swedbank Penn. 15% 20% 23% 36% 26% 23% SHB Kortrnte 63% 75% 31% 68% 38% 38% SHB Kortrnte 16% 10% 11% 7% 11% 9% SHB Kortrnte 26% 10% 8% 16% 10% 7% Nordea Sekura 70% 37% 84% 67% 98% 83% Nordea Sekura 38% 11% 84% 22% 34% 32% 30 June 2009 Nordea Sekura 54% 9% 83% 70% 37% 31% Nordea Likviditet 52% 9% 68% 60% 35% 30% Nordea Instit Penn 50% 9% 49% 48% 35% 29% Swedbank Sv Likvid 38% 4% 28% 27% 41% 86% Swedbank Penningm 41% 4% 30% 27% 41% 98% Nordea Likviditet 49% 18% 91% 59% 17% 17% 31 December 2008 Nordea Likviditet 36% 10% 67% 19% 34% 32% Nordea Instit Penn 53% 5% 32% 33% 29% 31% Swedbank Sv Likvid 34% 16% 24% 20% 16% 75% Swedbank Penningm 38% 21% 28% 26% 19% 83% Nordea Instit Penn 65% 20% 51% 34% 15% 15% Swedbank Sv Likvid 13% 29% 86% 17% 25% 82% Swedbank Penningm 12% 27% 82% 17% 25% 86%


Table A2 Cross-investments

PART 1

Interest Rates After the Credit Crunch: Markets and Models Evolution
Marco Bianchetti, Market Risk Management, Intesa San Paolo
Mattia Carlicchi, Market Risk Management, Intesa San Paolo1

Abstract
We present a quantitative study of the evolution of markets and models during the recent crisis. In particular, we focus on the fixed income market and we analyze the most relevant empirical evidence regarding the divergence between Libor and OIS rates, the explosion of basis swap spreads, and the diffusion of collateral agreements and CSA-discounting, in terms of credit and liquidity effects. We also review the new modern pricing approach prevailing among practitioners, based on multiple yield curves reflecting the different credit and liquidity risk of Libor rates with different tenors and the overnight discounting of cash flows originated by derivative transactions under collateral with daily margination. We report the classical and modern no-arbitrage pricing formulas for plain vanilla interest rate derivatives, and the multiple-curve generalization of the market standard SABR model with stochastic volatility. We then report the results of an empirical analysis on recent market data comparing pre- and post-credit crunch pricing methodologies and showing the transition of the market practice from the classical to the modern framework. In particular, we prove that the market of interest rate swaps has abandoned, since March 2010, the classical single-curve pricing approach, typical of the pre-credit crunch interest rate world, and has adopted the modern multiple-curve CSA approach, thus incorporating credit and liquidity effects into market prices. The same analysis is applied to European caps/floors, finding that the full transition to the modern multiple-curve CSA approach was deferred until August 2010. Finally, we show the robustness of the SABR model in calibrating the market volatility smile coherently with the new market evidence.
1 The authors gratefully acknowledge fruitful interactions with A. Battauz, A. Castagna, C. C. Duminuco, F. Mercurio, M. Morini, M. Trapletti, and colleagues at Market Risk Management and Fixed Income trading desks.



The financial crisis that began in the second half of 2007 has triggered, among its many other implications, a deep evolution of the classical framework adopted for trading derivatives. In particular, credit and liquidity issues were found to have macroscopic impacts on the prices of financial instruments, both plain vanilla and exotic. These are clearly visible in the market quotes of plain vanilla interest rate derivatives, such as deposits, forward rate agreements (FRA), swaps (IRS), and options (caps, floors, and swaptions). Since August 2007 the primary interest rates of the interbank market, such as Libor, Euribor, Eonia, and the Federal Funds rate,2 have displayed large basis spreads that have reached up to 200 basis points. Similar divergences are also found between FRA rates and the forward rates implied by two consecutive deposits, and, similarly, among swap rates with different floating leg tenors. Recently, the market has also included the effect of the collateral agreements widely diffused among derivatives counterparties in the interbank market.

After this market evolution, the standard no-arbitrage framework adopted to price derivatives, developed over forty years following the Copernican revolution of Black and Scholes (1973) and Merton (1973), became obsolete. Familiar relations described in standard textbooks [see, for example, Brigo and Mercurio (2006), Hull (2010)], such as the basic definition of forward interest rates, or the swap pricing formula, had to be abandoned. Also the fundamental idea of the construction of a single risk-free yield curve, reflecting at the same time the present cost of funding of future cash flows and the level of forward rates, has been ruled out. The financial community has thus been forced to start the development of a new theoretical framework, including a larger set of relevant risk factors, and to review from scratch the no-arbitrage models used on the market for derivatives pricing and risk analysis. We refer to such old and new frameworks as classical and modern, respectively, to mark the paradigm shift induced by the crisis.

The topics discussed in this paper sit at the heart of the present derivatives market, with many consequences for trading, financial control, risk management, and IT, and are attracting growing attention in the financial literature. To our knowledge, they have been approached by Kijima et al. (2008), Chibane and Sheldon (2009), Ametrano and Bianchetti (2009), Ametrano (2011), and Fujii et al. (2009a, 2010a, 2011) in terms of multiple curves; by Henrard (2007, 2009) and Fries (2010) using a first-principles approach; by Bianchetti (2010) using a foreign currency approach; by Fujii et al. (2009b), Mercurio (2009, 2010a, 2010b), and Amin (2010) within the Libor Market Model; by Pallavicini and Tarenghi (2010) and Moreni and Pallavicini (2010) within the HJM model; by Kenyon (2010) using a short rate model; by Morini (2009) in terms of counterparty risk; and by Burghard and Kjaer (2010), Piterbarg (2010a, 2010b), Fujii et al. (2010b), and Morini and Prampolini (2010) in terms of cost of funding.

Market evolution
In this section we discuss the most important market data showing the main consequences of the credit crunch, which started in August 2007. We focus, in particular, on euro interest rates, since they show rather peculiar and persistent effects that have strong impacts on pricing methodologies. The same results hold for other currencies, USD Libor and Federal Funds rates in particular [Mercurio (2009, 2010b)].

Euribor-OIS basis


Figure 1 reports the historical series of the Euribor deposit six-month (6M) rate, of the Eonia overnight indexed swap3 (OIS) six-month (6M) rate, and of the iTraxx financial senior CDS 5Y index value for the European market over the time interval Jan. 06 to Dec. 10. Before August 2007, the two interbank rates (Euribor and Eonia) display strictly overlapping trends, differing by no more than 6 bps. In August 2007, we observe a sudden increase of the Euribor rate and a simultaneous decrease of the OIS rate that lead to the explosion of the corresponding basis spread, which touched a peak of 222 bps in October 2008, shortly after Lehman Brothers filed for bankruptcy protection. Subsequently, the basis narrowed considerably and stabilized between 40 bps and 60 bps. Notice that the pre-crisis level has never been recovered. The same effect is observed for other similar pairs, such as Euribor 3M versus OIS 3M. The abrupt divergence between the Euribor and OIS rates can be explained by considering both the monetary policy decisions adopted by international authorities in response to the financial turmoil and the impact of the credit crunch on the credit and liquidity risk perception of the market, as well as the different financial meaning and dynamics of these rates.

The Euribor rate is the reference rate for over-the-counter (OTC) transactions in the Euro Area. It is defined as the rate at which euro interbank deposits are being offered within the EMU zone by one prime bank to another at 11:00 a.m. Brussels time. The rate fixings for a strip of 15 maturities, ranging from one day to one year, are constructed as the trimmed average of the rates submitted (excluding the highest and lowest 15% tails) by a panel of banks. The contribution panel is composed, as of September 2010, of 42 banks, selected among the E.U. banks with the highest volume of business in the Eurozone money markets, plus some large international banks from non-E.U. countries with important Eurozone operations.


2 Libor, sponsored by the British Banking Association (BBA), is quoted in all the major currencies and is the reference rate for international over-the-counter (OTC) transactions (see www.bbalibor.com). Euribor and Eonia, sponsored by the European Banking Federation (EBF), are the reference rates for OTC transactions in the euro market (see www.euribor.org). The Federal Funds rate is a primary rate of the USD market and is set by the Federal Open Market Committee (FOMC) according to the monetary policy decisions of the Federal Reserve (FED).
3 The overnight index swap (OIS) is a swap with a fixed leg versus a floating leg indexed to the overnight rate. The euro market quotes a standard OIS strip indexed to the Eonia rate (daily compounded) up to 30 years maturity.


Thus, Euribor rates reflect the average cost of funding of banks in the interbank market at each given maturity. During the crisis, the solvency and solidity of the entire financial sector was brought into question and the credit and liquidity risk and premia associated with interbank counterparties increased sharply. The Euribor rates immediately reflected these dynamics and rose to their highest values in more than a decade. As seen in Figure 1, the Euribor 6M rate suddenly increased in August 2007 and reached 5.49 percent on 10th October 2008.
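As an illustration of the fixing mechanism just described, the following Python sketch computes a Euribor-style trimmed average of panel submissions, discarding the highest and lowest 15% tails. The submitted rates are invented, and the rounding and trimming conventions of the official fixing may differ in detail.

    # Sketch of a trimmed-average fixing: drop the top and bottom tails of the
    # panel submissions, then average the remaining quotes. Data illustrative.

    def trimmed_fixing(submissions, tail=0.15):
        ranked = sorted(submissions)
        k = int(round(len(ranked) * tail))                # quotes cut per tail
        kept = ranked[k:len(ranked) - k] if k else ranked
        return sum(kept) / len(kept)

    panel = [5.32, 5.35, 5.36, 5.38, 5.40, 5.41, 5.42, 5.44, 5.45, 5.49,
             5.50, 5.52, 5.55, 5.58, 5.60, 5.62, 5.65, 5.70, 5.75, 5.90]
    print(f"Fixing: {trimmed_fixing(panel):.3f}%")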



The Eonia rate is the reference rate for overnight OTC transactions in the Euro Area. It is constructed as the average rate of the overnight transactions (one day maturity deposits) executed during a given business day by a panel of banks on the interbank money market, weighted with the corresponding transaction volumes. The Eonia contribution panel coincides with the Euribor contribution panel. Thus the Eonia rate includes information about the short-term (overnight) liquidity expectations of banks in the euro money markets. It is also used by the European Central Bank (ECB) as a method of effecting and observing the transmission of its monetary policy actions. During the crisis, the central banks were mainly concerned with stabilizing the level of liquidity in the market, thus they reduced the level of the official rates: the deposit facility rate and the marginal lending facility rate. Empirical evidence suggests that the Eonia rate is always higher than the deposit facility rate and lower than the marginal lending facility rate, defining the so-called rates corridor. Furthermore, the daily tenor of the Eonia rate makes the credit and liquidity risks reflected in it negligible. For this reason the OIS rates are considered the best proxies available in the market for the risk-free rate.

The spread between the Euribor deposit 6M and the Eonia OIS 6M rate is shown on the right axis (Jan. 06 to Dec. 10 window; source: Bloomberg).

Figure 1 Historical series of Euribor deposit 6M rate, Eonia OIS 6M rate, and iTraxx senior financial CDS 5Y index value for the European market.

Thus the Euribor-OIS basis explosion of August 2007 plotted in Figure 1 is essentially a consequence of the different credit and liquidity risk reflected by Euribor and Eonia rates. We stress that such a divergence is not a consequence of the counterparty risk carried by the financial contracts, deposits and OISs, exchanged in the interbank market by risky counterparties, but depends on the different fixing levels of the underlying Euribor and Eonia rates. The different influences of credit risk on Libor and overnight rates can also be appreciated in Figure 1 by comparing the historical series for the Euribor-OIS spread with the iTraxx financial senior CDS 5Y index value for the European market, which represents the level of the CDS spread related to a generic European primary bank. We observe that the Euribor-OIS basis explosion of August 2007 matches the CDS explosion, corresponding to the generalized increase of the default risk seen in the interbank market.

The liquidity risk component in Euribor and Eonia interbank rates is distinct from, but strongly correlated with, the credit risk component. According to Acerbi and Scandolo (2007), liquidity risk may appear in at least three circumstances: when there is a lack of liquidity to cover short-term debt obligations (funding liquidity risk), when it is difficult to liquidate assets on the market due to excessive bid-offer spreads (market liquidity risk), and when it is difficult to borrow funds on the market due to excessive funding cost (systemic liquidity risk). According to Morini (2009), these three elements are, in principle, not a problem so long as they do not appear together, because a bank with, for instance, problem 1 and 2 (or 3) will be able to finance itself by borrowing funds (or liquidating assets) on the market. During the crisis these three scenarios manifested themselves jointly at the same time, thus generating a systemic lack of liquidity [Michaud and Upper (2008)]. Clearly, it is difficult to disentangle the liquidity and credit risk components in the Euribor and Eonia rates because, in particular, they do not refer to the default risk of one counterparty in a single derivative deal but to a money market with bilateral credit risk [Morini (2009)].

Finally, we stress that, as seen in Figure 1, the Libor-OIS basis is still persistent today at a non-negligible level, despite the lower rate and higher liquidity regime reached after the most acute phase of the crisis and the strong interventions of central banks and governments. Clearly the market has learnt the lesson of the crisis and has not forgotten that these interest rates are driven by different credit and liquidity dynamics. From an historical point of view, we can compare this effect to the appearance of the volatility smile on the option markets after the 1987 crash [Derman and Kani (1994)]. It is still there.

FRA rates versus forward rates


The considerations above, with reference to the spot Euribor and Eonia rates underlying deposit and OIS contracts, also apply to forward rates and FRA rates. The FRA 3x6 rate is the equilibrium (fair) rate of a FRA contract starting at the spot date (today plus two working days in the euro market), maturing in six months, with a floating leg indexed to the forward interest rate between three and six months, versus a fixed interest rate leg.

The paths of market FRA rates and of the corresponding forward rates implied by two consecutive Eonia OIS deposits are similar to those observed in Figure 1 for the Euribor deposit and the Eonia OIS, respectively. In particular, a sudden divergence between the quoted FRA rates and the implied forward rates arose in August 2007, regardless of the maturity, and reached its peak in October 2008 with the Lehman crash. Mercurio (2009) has proven that the effects above may be explained within a simple credit model with a default-free zero coupon bond and a risky zero coupon bond issued by a defaultable counterparty with recovery rate R. The associated risk-free and risky Libor rates are the underlyings of the corresponding risk-free and risky FRAs.
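In formulas, and using notation consistent with the multiple-curve framework described later in the paper (a sketch of the standard relation, not a reproduction of the original equations), the forward rate implied by two discount factors on a single curve C is

    F(t; T_1, T_2) = \frac{1}{\tau(T_1, T_2)} \left[ \frac{P(t, T_1)}{P(t, T_2)} - 1 \right],

so that, for instance, the 3x6 forward implied by two consecutive Eonia OIS discount factors can be compared directly with the quoted FRA 3x6 rate. Before August 2007 the two essentially coincided; afterwards

    \mathrm{FRA}_{3\times 6}(t) \; \neq \; F_{\mathrm{OIS}}(t; T_1, T_2),

and the spread between them reflects the credit and liquidity premia embedded in the Euribor rate underlying the FRA contract.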

Basis swaps
A third evidence of the regime change after the credit crunch is the explosion of basis swap spreads. Basis swaps are quoted on the euro interbank market in terms of the difference between the fixed equilibrium swap rates of two swaps. For example, the quoted Euribor 3M versus Euribor 6M basis swap rate is the difference between the fixed rates of a first standard swap with a Euribor 3M floating leg (quarterly frequency) versus a fixed leg (annual frequency), and of a second swap with a Euribor 6M floating leg (semi-annual frequency) versus a fixed leg (annual frequency). The frequency of the floating legs is the tenor of the corresponding Euribor rates. The Eonia rate has the shortest tenor (one day). The basis swap spreads were negligible (or even not quoted) before the crisis. They suddenly diverged in August 2007 and peaked in October 2008 with the Lehman crash. The basis swap involves a sequence of spot and forward rates carrying the credit and liquidity risk discussed above. Hence, the basis spread explosion can be interpreted, in principle, in terms of the different credit and liquidity risks carried by the underlying Libor rates with different tenors. From the findings described above we understand that, after the crisis, market players prefer receiving floating payments with higher frequency (i.e., 3M) indexed to lower tenor Euribor rates (i.e., Euribor 3M) over floating payments with lower frequency (i.e., 6M) indexed to higher tenor Euribor rates (i.e., Euribor 6M), and are keen to pay a premium for the difference. Hence in a basis swap (i.e., 3M versus 6M), the floating leg indexed to the higher rate tenor (6M) must include a risk premium higher than that included in the floating leg indexed to the shorter rate tenor (3M, both with the same maturity). Thus, a positive spread emerges between the two corresponding equilibrium rates (or, in other words, a positive spread must be added to the 3M floating leg to equate the value of the 6M floating leg). According to Morini (2009), a basis swap between two interbank counterparties under collateral agreement can be described as the difference between two investment strategies. Fixing, for instance, a basis swap Euribor 3M versus Euribor 6M with 6M maturity, scheduled on three dates T0, T1 = T0+3M, T2 = T0+6M, we have the following two strategies:

6M floating leg: at T0, choose a counterparty C1 with a high credit standing (that is, belonging to the Euribor contribution panel) with a collateral agreement in place, and lend the notional for six months at the Euribor 6M rate prevailing at T0 (Euribor 6M flat, because C1 is a Euribor counterparty). At maturity T2, recover the notional plus interest from C1. Notice that if counterparty C1 defaults within six months we gain full recovery thanks to the collateral agreement.

3M+3M floating leg: at T0, choose a counterparty C1 with a high credit standing (belonging to the Euribor contribution panel) with a collateral agreement in place, and lend the notional for three months at the Euribor 3M rate (flat) prevailing at T0. At T1, recover the notional plus interest and check the credit standing of C1: if C1 has maintained its credit standing (it still belongs to the Euribor contribution panel), then lend the money again to C1 for three months at the Euribor 3M rate (flat) prevailing at T1; otherwise, choose another counterparty C2 belonging to the Euribor panel with a collateral agreement in place, and lend the money to C2 at the same interest rate. At maturity T2, recover the notional plus interest from C1 or C2. Again, if counterparties C1 or C2 default within six months we gain full recovery thanks to the collateral agreements.

Clearly, the 3M+3M leg implicitly embeds a bias towards the group of banks with the best credit standing, typically those belonging to the Euribor contribution panel. Hence the counterparty risk carried by the 3M+3M leg must be lower than that carried by the 6M leg. In other words, the expectation of the survival probability of the borrower of the 3M leg in the second 3M-6M period is higher than the survival probability of the borrower of the 6M leg in the same period. This lower risk is embedded into lower Euribor 3M rates with respect to Euribor 6M rates. But with collateralization the two legs have both null counterparty risk. Thus a positive spread must be added to the 3M+3M leg to reach equilibrium. The same discussion can be repeated, mutatis mutandis, in terms of liquidity risk. We stress that the credit and liquidity risk involved here are those carried by the risky Libor rates underlying the basis swap, reflecting the average default and liquidity risk of the interbank money market (of the Libor panel banks), not those associated to the specific counterparties involved in the financial contract. We stress also that such effects were already present before the credit crunch, as discussed in Tuckman and Porfirio (2004), and well known to market players, but not effective due to negligible basis spreads.
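In compact form, and using the multiple-curve notation introduced later in the paper (the exact broker conventions may differ slightly), the equilibrium basis spread added to the 3M leg of a basis swap with maturity T satisfies

    \sum_{i} \tau^{3M}_{i}\, P_d(t, T^{3M}_{i}) \left[ F_{3M,i}(t) + \Delta_{3M,6M}(t,T) \right]
        \;=\; \sum_{j} \tau^{6M}_{j}\, P_d(t, T^{6M}_{j})\, F_{6M,j}(t),

which is approximately equal to the difference between the equilibrium fixed rates of the two standard swaps quoted against Euribor 6M and Euribor 3M. A positive spread is the premium paid for receiving the more frequent, lower-tenor floating payments.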

Collateralization and OIS-discounting


Another effect of the credit crunch has been the great diffusion of collateral agreements to reduce the counterparty risk of OTC derivatives positions. Nowadays most of the counterparties on the interbank market have mutual collateral agreements in place. In 2010, more than 70 percent of all OTC derivatives transactions were collateralized [ISDA (2010)]. Typical financial transactions generate streams of future cash flows, whose


total net present value (NPV = algebraic sum of all discounted expected cash flows) implies a credit exposure between the two counterparties. If, for counterparty A, NPV(A)>0, then counterparty A expects to receive, on average, future cash flows from counterparty B (in other words, A has a credit with B). On the other hand, if counterparty B has NPV(B)<0, then it expects to pay, on average, future cash flows to counterparty A (in other words, B has a debt with A). The reverse holds if NPV(A)<0 and NPV(B)>0. Such credit exposure can be mitigated through a guarantee, called collateral agreement, or credit support annex (CSA), following the International Swaps and Derivatives Association (ISDA) standards widely used to regulate OTC transactions. The main feature of the CSA is a margination mechanism similar to those adopted by central clearing houses for standard instruments exchanges (i.e., futures). In a nutshell, at every margination date the two counterparties check the value of the portfolio of mutual OTC transactions and regulate the margin, adding to or subtracting from the collateral account the corresponding mark to market variation with respect to the preceding margination date. The margination can be regulated with cash or with (primary) assets of corresponding value. In any case the collateral account holds, at each date, the total NPV of the portfolio, which is positive for the creditor counterparty and negative for the debtor counterparty. The collateral amount is available to the creditor. On the other side, the debtor receives an interest on the collateral amount, called collateral rate. Hence, we can see the collateral mechanism as a funding mechanism, transferring liquidity from the debtor to the creditor. The main difference with traditional funding through deposit contracts are that, using derivatives, we have longer maturities and stochastic lending/borrowing side and amount. We can also look at CSA as a hedging mechanism, where the collateral amount hedges the creditor against the event of default of the debtor. The most diffused CSA provides a daily margination mechanism and an overnight collateral rate [ISDA (2010)]. Actual CSAs provide many other detailed features that are beyond the scope of this paper. Thus, a first important consequence of the diffusion of collateral agreements among interbank counterparties is that we can consider the derivatives prices quoted on the interbank market as counterparty riskfree OTC transactions. A second important consequence is that, by no-arbitrage, the CSA margination rate and the discounting rate of future cash flows must match. Hence the name of CSA discounting. In particular, the most diffused overnight CSA implies overnight-based discounting and the construction of a discounting yield curve that must reflect, for each maturity, the funding level in an overnight collateralized interbank market. Thus overnight indexed swaps (OIS) are the natural instruments for discounting curve construction. Hence the alternative name of OIS discounting or OIS (yield) curve. Such discounting curve is also the best available proxy of a riskfree yield curve. In the case of absence of CSA, using the same no-arbitrage principle

between the funding and the discounting rate, we conclude that a bank should discount future cash flows (positive or negative) using its own traditional cost of funding term structure. This has important (and rather involved) consequences, such that, according to Morini and Prampolini (2009), each counterparty assigns a different present value to the same future cash flow, breaking the fair value symmetry; that a worsening of its credit standing allows the bank to sell derivatives (options in particular) at more competitive prices (the lower the rate, the higher the discount, the lower the price); as well as the problem of double counting the debt value adjustment (DVA) to the fair value. Presently, the market is in the middle of a transition phase from the classical Libor-based discounting methodology to the modern CSA-based methodology. OTC transactions executed on the interbank market normally use CSA discounting. In particular, plain vanilla interest rate derivatives, such as FRA, swaps, basis swaps, caps/floor/swaptions are quoted by the main brokers using CSA discounting [ICAP (2010)]. However, presently just a few banks have declared full adoption of CSA discounting also for balance sheet revaluation and collateral margination [Bianchetti (2011)]. Finally, we stress that before the crisis the old-style standard Libor curve was representative of the average funding level on the interbank market [Hull (2010). Such curve, even if considered a good proxy for a riskfree curve, thanks to the perceived low counterparty risk of primary banks (belonging to the Libor contribution panel), was not strictly riskfree because of the absence of collateralization.
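The daily margination mechanism described above can be illustrated with a minimal Python sketch: at each margination date the collateral account is reset to the current NPV of the portfolio, and the collateral (overnight) rate accrues on the posted amount. The NPV path and rates are invented, sign conventions are simplified to a single portfolio seen from one counterparty, and real CSAs add features (thresholds, minimum transfer amounts, eligible assets) that are ignored here.

    # Sketch of daily CSA margination: the collateral account tracks the portfolio
    # NPV, and the debtor earns the overnight collateral rate on the posted amount.

    npv_path = [0.0, 1.2, 0.8, -0.5, -0.4]            # portfolio NPV seen by bank A (MEUR)
    overnight_rate = [0.004, 0.004, 0.0035, 0.0035]   # annualized collateral rate per day
    dt = 1.0 / 365.0                                  # daily margination

    collateral = 0.0
    interest_to_debtor = 0.0
    for today_npv, rate in zip(npv_path[1:], overnight_rate):
        # interest on the collateral held over the previous day, paid to the debtor
        interest_to_debtor += abs(collateral) * rate * dt
        # margin call: reset the collateral account to the current NPV
        margin = today_npv - collateral
        collateral = today_npv
        print(f"NPV={today_npv:+.2f}  margin call={margin:+.2f}  collateral={collateral:+.2f}")

    print(f"Total collateral interest accrued: {interest_to_debtor:.6f} MEUR")

The no-arbitrage link between the margination rate and the discounting rate is what the text calls CSA discounting: with an overnight collateral rate, collateralized cash flows are discounted on the OIS curve.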

Modeling evolution
According to Bianchetti and Morini (2010), the market frictions discussed above have created a sort of segmentation of the interest rate market into sub-areas, mainly corresponding to instruments with 1M, 3M, 6M, 12M underlying rate tenors. These are characterized, in principle, by different internal dynamics, liquidity, and credit risk premia, reflecting the different views and interests of the market players. In response to the crisis, the classical pricing framework, based on a single yield curve used to calculate forward rates and discount factors, has been abandoned, and a new modern pricing approach has prevailed among practitioners. The new methodology takes into account the market segmentation as an empirical evidence and incorporates the new interest rate dynamics into a multiple curve framework as follows:

Discounting curves: these are the yield curves used to discount future cash flows. As discussed above, the curve must be constructed and selected such that it reflects the cost of funding of the bank in connection with the actual nature of the specific contract that generates the cash flows. In particular, an OIS-based curve is used to discount cash flows generated by a contract under CSA with daily margination and overnight collateral rate; a funding curve is used in the case of contracts without CSA; and in the case of non-standard CSA (i.e., different margination frequency, rate, threshold, etc.), appropriate curves should, in principle, be selected (we will not discuss this topic here, since it applies to a minority of deals and would be beyond the scope of the present paper). We stress that the funding curve for no-CSA contracts is specific to each counterparty, which will have its own funding curve. This modern discounting methodology is called CSA-discounting.


Forwarding curves: these are the yield curves used to compute forward rates. The curve must be constructed and selected according to the tenor and typology of the rate underlying the actual contract to be priced. For example, a swap floating leg indexed to Euribor 6M requires a Euribor 6M forwarding curve constructed from quoted instruments with Euribor 6M underlying rate.

Following Bianchetti (2010), we report in Table 1 the comparison between the classical and the modern frameworks, called single-curve and multiple-curve approach, respectively. The adoption of the multiple-curve approach has led to the revision of no-arbitrage pricing formulas. According to Mercurio (2009, 2010a, 2010b), we compare in Table 2 the classical and modern pricing formulas for plain vanilla interest rate derivatives. We stress that the fundamental quantity of the modern pricing framework is the FRA rate F. Indeed, following Mercurio (2009, 2010a, 2010b), the correct probability measure to be used in expectations is that associated with the discounting curve Cd, under which the forward rate is no longer a martingale. Instead, the FRA rate, by definition, is a martingale under such measure.

Empirical pricing analysis

In the following sections we present the results of an empirical analysis comparing the results of the three pricing frameworks described above against market quotations of plain vanilla interest rate derivatives at two different valuation dates. The aim of this analysis is to highlight the time evolution of the market pricing approach as a consequence of the financial crisis.

Market data, volatility surfaces and yield curves


The reference market quotes that we considered are, in particular: euro forward start interest rate swap contracts (FSIRS), i.e., market swap rates based on Euribor 6M, published by Reuters; and euro cap/floor European options, i.e., market premia and Black implied volatility surfaces based on Euribor 6M, published by Reuters.

Yield curves construction
Classical methodology (single-curve): select a single finite set of the most convenient (i.e., liquid) vanilla interest rate market instruments and build a single yield curve C using the preferred bootstrapping procedure. For example, a common choice in the European market is a combination of short-term EUR deposits, medium-term Futures/FRAs on Euribor 3M, and medium-long-term swaps on Euribor 6M.
Modern methodology (multiple-curve): build one discounting curve Cd using the preferred selection of vanilla interest rate market instruments and bootstrapping procedure; build multiple distinct forwarding curves Cx using the preferred selections of distinct sets of vanilla interest rate market instruments, each homogeneous in the underlying rate tenor (typically x = 1M, 3M, 6M, 12M), and bootstrapping procedures. For example, for the construction of the forwarding curve C6M only market instruments with six-month tenor are considered.

Computation of expected cash flows
Classical: for each interest rate coupon compute the relevant forward rate using the given yield curve C and the standard forward rate formula, with t ≤ Tk-1 < Tk, where τk is the year fraction related to the time interval [Tk-1, Tk]. Compute cash flows as expectations at time t of the corresponding coupon payoffs with respect to the Tk-forward measure Q^Tk, associated with the numeraire P(t, Tk) from the same yield curve C.
Modern: for each interest rate coupon compute the relevant FRA rate Fx,k(t) with tenor x using the corresponding forwarding curve Cx, with t ≤ Tk-1 < Tk, where τx,k is the year fraction associated with the time interval [Tk-1, Tk]. Compute cash flows as expectations at time t of the corresponding coupon payoffs with respect to the discounting Tk-forward measure Qd^Tk, associated with the numeraire Pd(t, Tk) from the discounting curve Cd.

Computation of discount factors
Classical: compute the relevant discount factors P(t, Tk) from the unique yield curve C defined in step 1.
Modern: compute the relevant discount factors Pd(t, Tk) from the discounting curve Cd of step 1.

Computation of the derivatives price
Classical: compute the derivative's price at time t as the sum of the discounted expected future cash flows.
Modern: compute the derivative's price at time t as the sum of the discounted expected future cash flows.

We refer to a general single-currency interest rate derivative under CSA characterized by m future coupons with payoffs π = {π1, ..., πm}, generating m cash flows c = {c1, ..., cm} at future dates T = {T1, ..., Tm}, with t < T1 < ... < Tm.

Table 1 Comparison between the classical single-curve methodology and the modern multiple-curve methodology
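For reference, the formulas summarized in Table 1 take, in standard notation, roughly the following form; this is a sketch consistent with Mercurio (2009) and Bianchetti (2010), and the original table may use slightly different notation.

Classical (single curve C):

    F_k(t) = \frac{1}{\tau_k} \left[ \frac{P(t, T_{k-1})}{P(t, T_k)} - 1 \right],
    \qquad
    \pi(t) = \sum_{k=1}^{m} P(t, T_k)\, \mathbb{E}_t^{Q^{T_k}}\!\left[ \pi_k \right].

Modern (forwarding curve C_x, discounting curve C_d):

    F_{x,k}(t) = \frac{1}{\tau_{x,k}} \left[ \frac{P_x(t, T_{k-1})}{P_x(t, T_k)} - 1 \right],
    \qquad
    \pi(t) = \sum_{k=1}^{m} P_d(t, T_k)\, \mathbb{E}_t^{Q_d^{T_k}}\!\left[ \pi_k \right],

where, in the modern framework, the FRA rate is defined as F_{x,k}(t) = \mathbb{E}_t^{Q_d^{T_k}}[L_x(T_{k-1}, T_k)] and is therefore a martingale under the discounting forward measure Q_d^{T_k}; the forwarding curve C_x is built so that it returns these FRA rates.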


[Table 2 compares, instrument by instrument (FRA, swap, basis swap, cap/floor), the no-arbitrage pricing formulas of the classical (single-curve) approach and of the modern (multiple-curve) approach.]

Table 2 Comparison table of classical and modern formulas for pricing plain vanilla derivatives. In red we emphasize the most relevant peculiarities of the multiple-curve method
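As an indication of the content of Table 2, the published versions of these formulas [see, for example, Mercurio (2009, 2010b)] take roughly the following form; this is a sketch, not a reproduction of the original table.

Swap rate, classical and modern:

    S(t) = \frac{P(t, T_0) - P(t, T_n)}{\sum_{i=1}^{n} \tau_i\, P(t, T_i)},
    \qquad
    S_x(t) = \frac{\sum_{j=1}^{n_x} \tau_{x,j}\, P_d(t, T_{x,j})\, F_{x,j}(t)}{\sum_{i=1}^{n} \tau_i\, P_d(t, T_i)}.

Caplet at T_k with strike K, classical and modern:

    \mathbf{Cpl}_k(t) = P(t, T_k)\, \tau_k\, \mathrm{Bl}\big(K, F_k(t), v_k\big),
    \qquad
    \mathbf{Cpl}_k(t) = P_d(t, T_k)\, \tau_{x,k}\, \mathrm{Bl}\big(K, F_{x,k}(t), v_{x,k}\big),

with the Black formula

    \mathrm{Bl}(K, F, v) = F\,\Phi(d_+) - K\,\Phi(d_-),
    \qquad
    d_\pm = \frac{\ln(F/K)}{v} \pm \frac{v}{2},
    \qquad
    v_{x,k} = \sigma_{x,k} \sqrt{T_{k-1} - t}.

The basis swap case reduces to the equilibrium spread condition sketched earlier in the market evolution section, with FRA rates and discount factors taken from the appropriate forwarding and discounting curves.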

All the market data were archived at close of business (17:30 CET) on 31st March and 31st August 2010, and refer to instruments traded among collateralized counterparties in the European interbank market. For the pricing analysis we defined the following yield curves:

Euribor standard: the classical yield curve, bootstrapped from short-term euro deposits (from 1D to the first futures), mid-term futures on Euribor 3M (below 3Y), and mid-to-long-term swaps on Euribor 6M (after 3Y) (Table 1, left column).

Euribor 6M standard: the pure Euribor 6M forwarding yield curve, bootstrapped from the euro deposit 6M, mid-term FRAs on Euribor 6M (below 3Y), and mid-to-long-term swaps on Euribor 6M (after 3Y). The discounting curve used in the bootstrapping procedure is the Euribor standard. This curve differs from the one above in the short to mid term, while after the three-year node it tends to coincide with the Euribor standard, since the bootstrapping instruments are the same.

Eonia OIS: the modern discounting yield curve, bootstrapped from quoted Eonia OIS. This yield curve usually presents lower rates and, hence, greater discount factors than those obtained with the Euribor standard above.

Euribor 6M CSA: the modern Euribor 6M forwarding yield curve, bootstrapped from the same market instruments as the Euribor 6M standard curve, but using the Eonia OIS as the discounting curve. This curve usually presents small, but not negligible, differences (2 bps) with respect to the Euribor 6M standard.

Pricing methodologies
We have tested three different pricing methodologies, as described below.

Single-curve approach: we use the Euribor standard yield curve to calculate both the discount factors P(t, Tk) and the forward rates F needed for pricing any interest rate derivative. This is the classical single-curve methodology adopted by the market before the credit crunch, without collateral, credit, and liquidity effects.

Multiple-curve no-CSA approach: we calculate discount factors Pd(t, Tk) on the Euribor standard curve and FRA rates Fx,k(t) on the Euribor 6M standard curve. This is the quick-and-dirty methodology adopted by the market in response to the credit crunch after August 2007, which distinguishes between discounting and forwarding curves. It is called no-CSA because it does not include the effect of collateral. Indeed, the Euribor standard discounting curve reflects the average cost of (uncollateralized) funding of a generic European interbank counterparty (belonging to the Euribor panel). The Euribor 6M standard forwarding curve construction also does not take collateralization into account, but it does include the tenor-specific credit and liquidity risk of the underlying Euribor 6M rate.

Multiple-curve CSA approach: we calculate discount factors Pd(t, Tk) on the Eonia OIS curve and FRA rates Fx,k(t) on the Euribor 6M CSA curve. This is the state-of-the-art modern methodology, fully coherent with the CSA nature of the interest rate derivatives considered and with the credit and liquidity risk of the underlying Euribor 6M rate.
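To make the difference between the single-curve and the multiple-curve CSA recipes concrete, the following minimal Python sketch computes the equilibrium rate of a forward start swap from two hypothetical flat curves (a discounting curve and an Euribor 6M forwarding curve). The curve levels, schedule, and day counts are illustrative assumptions and not the market data used in this analysis.

import math

def df(rate, t):
    # continuously compounded discount factor for a flat curve (illustrative)
    return math.exp(-rate * t)

def fwd_swap_rate(disc_rate, fwd_rate, start, maturity):
    """Equilibrium rate of a forward start swap: annual fixed leg vs. semi-annual floating leg.
    Forward rates are implied from the forwarding curve, discounting uses the discounting curve."""
    fixed_times = [start + i for i in range(1, int(maturity - start) + 1)]
    annuity = sum(df(disc_rate, t) for t in fixed_times)            # fixed leg annuity
    float_times = [start + 0.5 * i for i in range(0, int(2 * (maturity - start)) + 1)]
    floating_leg = 0.0
    for t0, t1 in zip(float_times[:-1], float_times[1:]):
        tau = t1 - t0
        fra = (df(fwd_rate, t0) / df(fwd_rate, t1) - 1.0) / tau     # forward rate off the forwarding curve
        floating_leg += tau * fra * df(disc_rate, t1)
    return floating_leg / annuity

# single-curve: one curve for both discounting and forwarding
single = fwd_swap_rate(disc_rate=0.025, fwd_rate=0.025, start=2.0, maturity=7.0)
# multiple-curve CSA: Eonia-like discounting, Euribor-6M-like forwarding (illustrative levels)
multi = fwd_swap_rate(disc_rate=0.020, fwd_rate=0.025, start=2.0, maturity=7.0)
print(f"single-curve rate: {single:.3%}, multiple-curve CSA rate: {multi:.3%}")

The gap between the two outputs illustrates why, once discounting moves to the OIS curve, quoted forward start swap rates can only be reproduced with the multiple-curve approach.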

The arbitrage-free formulas used in the analysis are those reported in Table 2: the single-curve approach in the left column and the two multiple-curve approaches in the right column. The two following sections report the findings of the analysis.

Forward start interest rate swaps
The forward start interest rate swap (FSIRS) contracts considered here are characterized by a floating leg on Euribor 6M with six-month frequency versus a fixed leg with annual frequency, with forward start dates and maturity dates ranging from 1 to 25 years. We selected forward start instead of spot start swaps because the former are more sensitive to the choice of the pricing methodology. For each methodology and each valuation date (31st March and 31st August 2010) we computed the theoretical equilibrium FSIRS rates and compared them with the market quotes. In Figure 2 (left-hand graphs) we report the results for the valuation date 31st March 2010, while in Table 3 we compare the most important numbers: the range of minimum and maximum discrepancies and the standard deviation. The spikes observed in the graphs for start dates below 3Y may be explained in terms of differences in the short-term yield curve construction, where there is a significant degree of freedom in choosing the bootstrapping instruments (deposits, FRAs, and futures). Smaller spikes are also present for short-tenor FSIRS with maturities below 3Y, because these swaps depend on a few forwards and discounts and are thus more sensitive to minor differences in the yield curves. Hence, in Table 3 we present the results both including and excluding the two stripes below 2 years start/maturity date, for both valuation dates (31st March and 31st August 2010).

From Figure 2 we observe that the first methodology has the worst performance, producing, on average, overestimated FSIRS rates. The second methodology introduces small improvements, at least below three years. This is expected, because the two curves used are very similar after three years, both using standard euro swaps on Euribor 6M. The third methodology is by far the best at reproducing the market data. The remaining differences of around 1 basis point may be explained by minor differences with respect to the market yield curves. The same observations apply to the results of 31st August 2010. We conclude that the market for interest rate swaps has, since March 2010, abandoned the classical single-curve pricing methodology, typical of the pre-credit-crunch interest rate world, and has adopted the modern multiple-curve CSA approach, thus incorporating into market prices the credit and liquidity effects described above.

Forward start interest rate swaps differences

31st March 2010
                         Range (all)      Std dev (all)   Range (excl. below 2Y)   Std dev (excl.)
Single-curve             [-18.4;+20.8]    2.84            [-3.2;+2.7]              1.89
Multiple-curve no-CSA    [-2.9;+3.1]      1.77            [-2.9;+2.6]              1.86
Multiple-curve CSA       [-2.9;+2.3]      0.53            [-1.0;+1.5]              0.37

31st August 2010
                         Range (all)      Std dev (all)   Range (excl. below 2Y)   Std dev (excl.)
Single-curve             [-16.3;+24.4]    2.58            [-3.9;+1.9]              1.15
Multiple-curve no-CSA    [-5.7;+2.9]      1.11            [-3.7;+1.7]              1.09
Multiple-curve CSA       [-4.1;+2.4]      0.47            [-1.4;+1.0]              0.26

Table 3  FSIRS differences (in basis points) on 31st March and 31st August 2010, including all FSIRS and excluding the two stripes with start/maturity below 2 years

Cap/floor options
The European cap/floor options considered here are characterized by floating payments with 6M frequency indexed to Euribor 6M, spot start date, maturity dates ranging from 3 to 30 years, and strikes ranging from 1 percent to 10 percent. The first caplet/floorlet, already known at spot date, is not included in the cap/floor premium. The market quotes floor premia for strikes below the at-the-money (ATM) level and cap premia for strikes above ATM. For each methodology and each valuation date (31st March and 31st August 2010) we computed the theoretical cap/floor premia and compared them with the market premia.


Figure 2  FSIRS and cap/floor differences (in basis points) from market data. Left panels: differences between theoretical FSIRS equilibrium rates and market quotes (valuation date 31st March 2010), by forward start and maturity. Right panels: cap/floor premia differences versus market (valuation date 31st August 2010), by maturity and strike (light colors: floors, dark colors: caps). Upper panels: single-curve approach; middle panels: multiple-curve no-CSA approach; lower panels: multiple-curve CSA approach. The y-axis scales of the middle and lower left-hand graphs are magnified to highlight price differences smaller than those of the upper left graph (source: Reuters).

The computation of the theoretical European cap/floor premia using the standard Black's formula in Table 2 requires two inputs: the pair of discounting and forwarding curves and the Black implied term volatility [Mercurio (2009, 2010b)]. Even if the market-driven quantity is the premium of the traded option, it is standard market convention to quote the option in terms of its Black implied term volatility. Clearly, once the premium is fixed by market supply and demand, the value of this volatility depends on the curves used for discounting and forwarding. Thus, a change in the market yield curves implies a corresponding change in the Black implied volatilities. In fact, in August 2010 the market began to quote two distinct volatility surfaces: one implied using the classical Euribor discounting curve and one implied using the modern Eonia discounting curve [ICAP (2010)]. The Eonia implied volatility is generally lower than the Euribor implied volatility, since the effect of lower Eonia rates (higher Eonia discount factors) must be compensated by lower values of implied volatility.

In line with this market change, we used the Euribor volatility to compute multiple-curve no-CSA prices and the Eonia volatility to compute multiple-curve CSA prices at 31st August 2010. The results for the valuation date 31st August 2010 are shown in Figure 2 and Table 4. We do not graphically report the results for 31st March 2010 because they are not necessary for the purpose of the analysis; all the relevant numbers are contained in Table 4. Overall, we notice again that, on both dates, the single-curve methodology (upper panels) performs very poorly. The multiple-curve no-CSA methodology (middle panels) performs well on both dates, with an absolute average difference of 1.4/1.6 bps over a total of 169 options and a standard deviation of 2.06/2.28 bps. Finally, the multiple-curve CSA methodology (lower panels) performs poorly on the first date (standard deviation 15.82 bps) and as well as the multiple-curve no-CSA methodology on the second date, with an absolute average difference of 1.7 bps and a standard deviation of 2.43 bps.

We conclude that the results discussed above are coherent with the interest rate market evolution after the credit crunch and, in particular, with the market changes announced in August 2010 [ICAP (2010)] and with our findings for forward start IRS discussed above. First of all, the market, at least since March 2010, has abandoned the classical single-curve pricing methodology, typical of the pre-credit-crunch interest rate world, and has adopted the modern multiple-curve approach. Second, the transition to the CSA-discounting methodology for options happened only in August 2010, thus incorporating into market prices the credit and liquidity effects described above. In the latter case, contrary to FSIRS, both modern multiple-curve methodologies (if correctly applied) lead to good repricing of the market premia because, at constant market premia, the change in the yield curves (switching from Euribor discounting to Eonia discounting) is compensated by the corresponding change in the Black implied volatilities.

Cap/floor premia differences

31st March 2010
                         Range            Standard deviation
Single-curve             [-5.8;+14.1]     6.3
Multiple-curve no-CSA    [-7.0;+5.8]      2.1
Multiple-curve CSA       [-8.9;+77.7]     15.8

31st August 2010
                         Range            Standard deviation
Single-curve             [+0.2;+20.0]     9.7
Multiple-curve no-CSA    [-6.3;+7.4]      2.3
Multiple-curve CSA       [-6.8;+9.6]      2.4

Table 4  Differences (in basis points) between theoretical and market cap/floor premia from Figure 2

SABR model calibration


The SABR (stochastic alpha beta rho) model developed by Hagan et al. (2002) is one of the simplest generalizations of the well-known Black's model to stochastic volatility: it preserves Black-like closed formulas for caps, floors, and swaptions, leads to a market-coherent description of the dynamics of the volatility, and allows calibration to the interest rate smile. Thanks to its mathematical simplicity and transparent financial interpretation, it has imposed itself as the market standard for pricing and hedging plain vanilla interest rate options and for calibrating the market volatility, often called the SABR volatility surface (for caps/floors) or cube (for swaptions). Similarly to the Black model, the modern version of the SABR model is obtained from the corresponding classical SABR version of Hagan et al. (2002) simply by replacing the classical forward rate with the modern FRA rate, and the Tk-forward Libor measure associated with the classical single-curve numeraire P(t, Tk) with the modern Tk-forward measure associated with the discounting numeraire Pd(t, Tk).

Table 5  Classical (single-curve) versus modern (multiple-curve) SABR model dynamics, together with the SABR volatility expression consistent with the multiple-curve approach [Hagan et al. (2002)]
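The equations originally shown in Table 5 did not survive the electronic layout. The following sketch reproduces their standard form from Hagan et al. (2002) together with the multiple-curve substitution described in the text (the forward rate F_k replaced by the FRA rate F_{x,k} and the measure changed to the Tk-forward measure of the discounting curve); the notation may differ slightly from the original table.

\begin{align*}
\text{Classical (single-curve):}\quad
& dF_k(t) = \sigma_k(t)\,F_k(t)^{\beta}\,dW_k(t), \qquad
  d\sigma_k(t) = \nu\,\sigma_k(t)\,dZ_k(t), \qquad d\langle W_k, Z_k\rangle_t = \rho\,dt,\\[4pt]
\text{Modern (multiple-curve):}\quad
& dF_{x,k}(t) = \sigma_{x,k}(t)\,F_{x,k}(t)^{\beta}\,dW_{x,k}(t), \qquad
  d\sigma_{x,k}(t) = \nu\,\sigma_{x,k}(t)\,dZ_{x,k}(t), \qquad d\langle W_{x,k}, Z_{x,k}\rangle_t = \rho\,dt,
\end{align*}

with initial volatility sigma(0) = alpha, and with the implied volatility given by Hagan et al.'s closed-form expansion sigma^SABR(K, F; alpha, beta, rho, nu), which in the modern version simply takes the FRA rate F_{x,k}(t) as input.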


Figure 3  SABR model calibration results. Each panel shows a volatility smile section (volatility versus strike): the blue dots represent the market implied forward volatility, the red line the standard calibration, the green line the vega-weighted calibration, and the purple line (right y-axis) the values of the vega. Left panels: market Euribor implied forward volatility; right panels: market Eonia implied forward volatility. Upper panels: short-term smile section (2-year maturity); lower panels: medium-term smile section (10-year maturity). Valuation date: 31st March 2010 (source: Reuters).

The SABR volatility formula remains unchanged, but takes the FRA rate as input. Cap/floor options are priced as in Table 2 using the standard Black's formula with the SABR volatility as input. In Table 5 we show the classical and the modern SABR equations. Using different multiple-curve pricing methodologies, based on different choices of discounting and forwarding yield curves, leads to the definition of two distinct implied volatility surfaces referring to the same collateralized market premia, as discussed above: the Euribor implied term volatility, consistent with the multiple-curve no-CSA approach, and the Eonia implied term volatility, consistent with the multiple-curve CSA approach. Notice that the SABR model refers to forward (not term) volatilities implied in caplets/floorlets (not caps/floors). We denote by sigma_x(t; Tk-1, Tk, K) the implied forward volatility seen at time t of a European caplet/floorlet on the spot Euribor rate Lx(Tk-1, Tk) with strike K, where x = {Euribor 6M standard, Euribor 6M CSA}.

We stripped the two forward volatility surfaces implied in the cap/floor premia published by Reuters on 31st March and 31st August 2010 using the two multiple-curve methodologies above. The stripping procedure requires many technicalities that we do not report here; we refer the reader to section 3.6 in Brigo and Mercurio (2006). The SABR calibration procedure is applied to each smile section, corresponding to the strip of caplets/floorlets with the same maturity date Tk, underlying FRA rate Fx,k(t), and different market strikes Kj, j = {1, ..., 14}.4 The calibration returns the values of the model parameters alpha, beta, rho, nu that minimize the distance between the market implied forward volatilities sigma_x^Mkt(t; Tk-1, Tk, Kj) and the corresponding theoretical SABR volatilities sigma_x^SABR(t; Tk-1, Tk, Kj) obtained through the closed analytic formula in Table 5. We thus obtain a set of SABR parameters for each smile section.

4  14 is the number of strikes quoted in the market.

SABR calibration errors

31st March 2010
                            Euribor implied volatility        Eonia implied volatility
                            Range            Std deviation    Range            Std deviation
Standard calibration        [-0.2%;+0.1%]    0.0003           [-0.1%;+0.1%]    0.0003
Vega-weighted calibration   [-0.2%;+0.1%]    0.0003           [-0.1%;+0.1%]    0.0002

31st August 2010
                            Euribor implied volatility        Eonia implied volatility
                            Range            Std deviation    Range            Std deviation
Standard calibration        [-0.3%;+0.2%]    0.0004           [-0.3%;+0.2%]    0.0004
Vega-weighted calibration   [-0.1%;+0.1%]    0.0004           [-0.1%;+0.1%]    0.0004

For each calibration procedure (standard and vega-weighted) and for each valuation date (31st March and 31st August 2010), we report the range of minimum and maximum calibration errors and the standard deviation of the errors (equally weighted for the standard calibration and vega-weighted for the vega-weighted calibration).

Table 6  SABR model calibration errors over the whole market volatility smile

For the two dates (31st March and 31st August 2010) and the two pricing methodologies (multiple-curve no-CSA and multiple-curve CSA), associated with the two corresponding forward volatility surfaces (Euribor and Eonia), we performed two minimizations using two distinct error functions:

a standard error function, defined as the square root of the sum of the squared differences between the SABR and the market forward volatilities,

Error_std(T_k) = \left\{ \sum_{j=1}^{14} \left[ \sigma_x^{Mkt}(t;T_{k-1},T_k,K_j) - \sigma_x^{SABR}(t;T_{k-1},T_k,K_j) \right]^2 \right\}^{1/2}    (1)

and a vega-weighted error function,

Error_vw(T_k) = \left\{ \sum_{j=1}^{14} \left[ \left( \sigma_x^{Mkt}(t;T_{k-1},T_k,K_j) - \sigma_x^{SABR}(t;T_{k-1},T_k,K_j) \right) W_{j,x} \right]^2 \right\}^{1/2},
\qquad W_{j,x} = \frac{\upsilon(T_k,K_j)}{\sum_{j=1}^{14}\upsilon(T_k,K_j)}    (2)

where upsilon(Tk, Kj) is the Black vega sensitivity of the caplet/floorlet option with strike Kj, FRA rate Fx,k(t), and maturity Tk. Weighting the errors by the sensitivity of the options to shifts in volatility allows us, during the calibration procedure, to give more importance to the near-ATM areas of the volatility surface, with high vega sensitivity and market liquidity, and less importance to the OTM areas, with lower vega and liquidity. The initial values of alpha, rho, and nu were 4.5 percent, -10 percent, and 20 percent, respectively. Different initializations gave no appreciable differences in the calibration results. Following Hagan et al. (2002) and West (2005), we fixed the value of the redundant parameter beta at 0.5. The minimization was performed using the built-in Matlab function patternsearch. A snapshot of the SABR calibration results for 31st March 2010 is shown in Figure 3, where we report two smile sections at the short term (two-year maturity) and the medium term (ten-year maturity). We do not include the graphs of the calibration results for the longer term and for 31st August 2010 because they would lead to the same considerations. However, in Table 6 we compare the two calibration approaches on both valuation dates, reporting the most important numbers: the range of minimum and maximum errors and the standard deviation.

Overall, the SABR model performs very well at both dates with both pricing methodologies. In particular, we notice that in the short term (two-year maturity, upper panels in Figure 3) the standard SABR calibration (red line) seems, at first sight, closer to the market volatility (blue dots) and better at replicating the trend in the OTM regions. However, a closer look reveals significant differences in the ATM area, where even small calibration errors can produce sizeable price variations. The vega-weighted SABR calibration (green line), instead, gives a better fit of the market volatility smile in the ATM region, in correspondence with the maximum vega sensitivity, and allows larger differences in the OTM regions, where the vega sensitivity is close to zero. Thus the vega-weighted calibration provides a more efficient fit in the regions of the volatility surface that are critical for option pricing. The effects are less visible for the medium term (lower panels in Figure 3) because of the higher vega sensitivity in the OTM regions. Both the standard and the vega-weighted approaches lead to similar results in terms of the range of minimum and maximum errors and the standard deviation (Table 6). In particular, the standard deviations of the errors over the 30-year term structure are almost the same. This is because the two calibrations differ only in the short term (up to four years), where the vega-weighted minimization ensures a better fit of the market data, as shown in the upper panels of Figure 3. We conclude that the SABR model is quite robust under generalization to the modern pricing framework and can be applied to properly fit the new dynamics of the market volatility smile and to price off-the-market options coherently with the new market practice.
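For readers who want to reproduce the flavour of the procedure, the following Python sketch calibrates alpha, rho, nu (with beta fixed at 0.5) to one hypothetical smile section by minimizing a vega-weighted error of the type in equation (2). It uses the standard Hagan et al. (2002) lognormal expansion and scipy's L-BFGS-B optimizer instead of the Matlab patternsearch routine used in the article, and the market quotes shown are invented for illustration, not the Reuters data analyzed above.

import math
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sabr_lognormal_vol(F, K, T, alpha, beta, rho, nu):
    """Hagan et al. (2002) lognormal implied volatility approximation."""
    if abs(F - K) < 1e-8:                                   # ATM limit
        return (alpha / F ** (1.0 - beta)) * (1.0 + T * (
            (1.0 - beta) ** 2 / 24.0 * alpha ** 2 / F ** (2.0 - 2.0 * beta)
            + 0.25 * rho * beta * nu * alpha / F ** (1.0 - beta)
            + (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2))
    logFK = math.log(F / K)
    FK = F * K
    z = nu / alpha * FK ** ((1.0 - beta) / 2.0) * logFK
    x_z = math.log((math.sqrt(1.0 - 2.0 * rho * z + z * z) + z - rho) / (1.0 - rho))
    denom = FK ** ((1.0 - beta) / 2.0) * (1.0 + (1.0 - beta) ** 2 / 24.0 * logFK ** 2
                                          + (1.0 - beta) ** 4 / 1920.0 * logFK ** 4)
    correction = 1.0 + T * ((1.0 - beta) ** 2 / 24.0 * alpha ** 2 / FK ** (1.0 - beta)
                            + 0.25 * rho * beta * nu * alpha / FK ** ((1.0 - beta) / 2.0)
                            + (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2)
    return alpha / denom * (z / x_z) * correction

def black_vega(F, K, T, sigma):
    """Black vega of a caplet/floorlet (per unit of annuity)."""
    d1 = (math.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    return F * math.sqrt(T) * norm.pdf(d1)

def calibrate_smile(F, T, strikes, mkt_vols, beta=0.5, vega_weighted=True):
    """Fit alpha, rho, nu to one smile section; beta is fixed, as in the article."""
    vegas = np.array([black_vega(F, K, T, v) for K, v in zip(strikes, mkt_vols)])
    w = vegas / vegas.sum() if vega_weighted else np.full(len(strikes), 1.0)   # eq. (2) vs eq. (1) weights

    def objective(params):
        alpha, rho, nu = params
        model = np.array([sabr_lognormal_vol(F, K, T, alpha, beta, rho, nu) for K in strikes])
        return math.sqrt(np.sum((w * (model - mkt_vols)) ** 2))

    x0 = [0.045, -0.10, 0.20]                               # initial values quoted in the text
    bounds = [(1e-4, 2.0), (-0.999, 0.999), (1e-4, 5.0)]
    return minimize(objective, x0, bounds=bounds, method="L-BFGS-B").x

# hypothetical 2Y smile section (illustrative quotes only)
strikes = np.array([0.01, 0.015, 0.02, 0.03, 0.04, 0.06, 0.10])
mkt_vols = np.array([0.55, 0.47, 0.42, 0.36, 0.33, 0.32, 0.34])
alpha, rho, nu = calibrate_smile(F=0.025, T=2.0, strikes=strikes, mkt_vols=mkt_vols)
print(f"calibrated alpha = {alpha:.4f}, rho = {rho:.3f}, nu = {nu:.3f}")

Setting vega_weighted=False reproduces the standard calibration of equation (1); comparing the two fits near the ATM strike illustrates the effect discussed above.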

Conclusion
In this work we have presented a quantitative study of the evolution of markets and models across the credit crunch crisis. In particular, we have focused on the fixed income market and analyzed the most relevant literature regarding the divergence between Libor and OIS rates, the divergence between FRA and forward rates, the explosion of basis swap spreads, and the diffusion of collateral agreements and CSA-discounting, in terms of credit and liquidity effects. These market frictions have induced a segmentation of the interest rate market into sub-areas, corresponding to instruments with risky underlying Libor rates distinct by tenor and to risk-free overnight rates, characterized, in principle, by different internal dynamics, liquidity, and credit risk premia, reflecting the different views and preferences of the market players.

In response to the crisis, the classical pricing framework, based on a single yield curve used to calculate forward rates and discount factors, has been abandoned, and a new, modern pricing approach has prevailed among practitioners, taking the market segmentation into account as empirical evidence and incorporating the new interest rate dynamics into a multiple-curve framework. The latter has required a deep revision of classical no-arbitrage pricing formulas for plain vanilla interest rate derivatives, now founded on the risk-neutral measure associated with the risk-free bank account and on the martingale property of the FRA rate under such a measure. In particular, we have reported the multiple-curve generalization of the SABR model, the simplest extension of the well-known Black's model with stochastic volatility, routinely used by market practitioners to fit the interest rate volatility smile and to price vanilla caps/floors and swaptions.

We have reported the results of an empirical analysis on recent market data comparing three different pre- and post-credit crunch pricing methodologies and demonstrated the transition of market practice from the classical to the modern pricing framework. In particular, we have shown that in the case of interest rate swaps the market has, since March 2010, abandoned the classical single-curve pricing methodology, typical of the pre-credit-crunch interest rate world, and adopted the modern multiple-curve CSA approach, thus incorporating the credit and liquidity effects into market prices. The same has happened with European caps/floors, with the full transition to the CSA-discounting methodology deferred until August 2010. Finally, we have shown that the SABR model is quite robust under generalization to the modern pricing framework and can be applied to properly fit the new dynamics of the market volatility smile and to price non-quoted, off-the-market options coherently with the new market practice.

The work presented here is a short step in the long-run theoretical re-foundation of the interest rate modeling framework in a post-crisis financial world, with Libor rates incorporating credit and liquidity risks. We believe that such risks, and the corresponding market segmentation expressed by large basis swap spreads, will not return to the negligible levels of the pre-crisis world, but will persist in the future, exactly as the volatility smile has persisted since the 1987 market crash. Expected future developments include the extension of pre-crisis pricing models to the multiple-curve world with stochastic basis, and the pricing of non-collateralized OTC derivatives consistently including the bilateral credit risk of the counterparties, in the form of credit value adjustment (CVA) and debt value adjustment (DVA), and the liquidity risk of the lender, in the form of liquidity value adjustment (LVA) [Bianchetti and Morini (2010)].

References
Acerbi, C. and G. Scandolo, 2007, Liquidity risk theory and coherent measures of risk, SSRN working paper, http://ssrn.com/abstract=1048322
Ametrano, F. M., 2011, Rates curves for forward Euribor estimation and CSA-discounting, QuantLib Forum, London, 18 January
Ametrano, F. M. and M. Bianchetti, 2009, Bootstrapping the illiquidity: multiple curve construction for coherent forward rates estimation, in Mercurio, F., ed., Modelling interest rates: latest advances for derivatives pricing, Risk Books, London
Amin, A., 2010, Calibration, simulation and hedging in a Heston Libor market model with stochastic basis, SSRN working paper, http://ssrn.com/abstract=1704415
Bianchetti, M., 2010, Two curves, one price, Risk Magazine, 23:8, 66-72
Bianchetti, M., 2011, Switching financial institutions to marking to market under the CSA regime, presentation at Global Derivatives, April
Bianchetti, M. and M. Morini, 2010, Interest rate after the credit crunch, WBS Course, 25-26 October, Milan
Black, F. and M. Scholes, 1973, The pricing of options and corporate liabilities, The Journal of Political Economy, 81:3, 637-654
Brigo, D. and F. Mercurio, 2006, Interest rate models: theory and practice, 2nd edition, Springer, Heidelberg
Burgard, C. and M. Kjaer, 2010, PDE representation of options with bilateral counterparty risk and funding cost, SSRN working paper, http://ssrn.com/abstract=1605307
Chibane, M. and G. Sheldon, 2009, Building curves on a good basis, SSRN working paper, http://ssrn.com/abstract=1394267
Derman, E. and I. Kani, 1994, The volatility smile and its implied tree, Risk Magazine, 7:2, 139-145
Fries, C. P., 2010, Discounting revisited: valuation under funding, counterparty risk and collateralization, SSRN working paper, http://ssrn.com/abstract=1609587
Fujii, M. and A. Takahashi, 2011, Choice of collateral currency, Risk Magazine, 24:1, 120-125
Fujii, M., Y. Shimada, and A. Takahashi, 2009a, A survey on modelling and analysis of basis spread, CARF working paper, CARF-F-195, available from http://ssrn.com/abstract=1520619
Fujii, M., Y. Shimada, and A. Takahashi, 2009b, A market model of interest rates with dynamic basis spreads in the presence of collateral and multiple currencies, CARF working paper, CARF-F-196, available from http://ssrn.com/abstract=1520618
Fujii, M., Y. Shimada, and A. Takahashi, 2010a, A note on construction of multiple swap curves with and without collateral, CARF working paper, CARF-F-154, available from http://ssrn.com/abstract=1440633
Fujii, M., Y. Shimada, and A. Takahashi, 2010b, Collateral posting and choice of collateral currency: implications for derivative pricing and risk management, CARF working paper, CARF-F-216, available from http://ssrn.com/abstract=1601866
Giura, R., 2010, Banca IMI swap trading, private communication
Hagan, P. S., D. Kumar, A. S. Lesniewski, and D. E. Woodward, 2002, Managing smile risk, Wilmott Magazine, July, 84-108
Henrard, M., 2007, The irony in the derivative discounting, SSRN working paper, http://ssrn.com/abstract=970509
Henrard, M., 2009, The irony in the derivative discounting part II: the crisis, SSRN working paper, http://ssrn.com/abstract=1433022
Hull, J., 2010, Options, futures and other derivatives, 7th edition, Prentice Hall
ICAP, 2010, communications to clients, 11 August 2010 and 15 September 2010
ISDA, 2010, ISDA margin survey 2010, available from http://www.isda.org
Kenyon, C., 2010, Post-shock short-rate pricing, Risk Magazine, 23:11, 83-87
Kijima, M., K. Tanaka, and T. Wong, 2008, A multi-quality model of interest rates, Quantitative Finance, 9:2, 133-145
Mercurio, F., 2009, Interest rates and the credit crunch: new formulas and market models, SSRN working paper, http://ssrn.com/abstract=1332205
Mercurio, F., 2010a, LIBOR market model with stochastic basis, SSRN working paper, http://ssrn.com/abstract=1583081
Mercurio, F., 2010b, Modern Libor market models: using different curves for projecting rates and for discounting, International Journal of Theoretical and Applied Finance, 13:1, 113-137
Mercurio, F. and M. Morini, 2009, Joining the SABR and Libor models together, Risk Magazine, 22:3, 80-85
Merton, R. C., 1973, Theory of rational option pricing, The Bell Journal of Economics and Management Science, 4:1, 141-183
Michaud, F. and C. Upper, 2008, What drives interbank rates? Evidence from the Libor panel, BIS Quarterly Review, March
Moreni, N. and A. Pallavicini, 2010, Parsimonious HJM modelling for multiple yield-curve dynamics, SSRN working paper, http://ssrn.com/abstract=1699300
Morini, M., 2009, Solving the puzzle in the interest rate market (part 1 and part 2), SSRN working paper, http://ssrn.com/abstract=1506046
Morini, M. and A. Prampolini, 2010, Risky funding: a unified framework for counterparty and liquidity charges, SSRN working paper, http://ssrn.com/abstract=1669930
Pallavicini, A. and M. Tarenghi, 2010, Interest-rate modelling with multiple yield curves, SSRN working paper, http://ssrn.com/abstract=1629688
Piterbarg, V., 2010a, Funding beyond discounting: collateral agreements and derivative pricing, Risk Magazine, 23:2, 97-102
Piterbarg, V., 2010b, Effects of funding and collateral, Global Derivatives & Risk Management Conference, May 2010, Paris
Tuckman, B. and P. Porfirio, 2003, Interest rate parity, money market basis swap, and cross-currency basis swap, Lehman Brothers Fixed Income Liquid Markets Research, LMR Quarterly, 2004, Q2
West, G., 2005, Calibration of the SABR model in illiquid markets, Applied Mathematical Finance, 12:4, 371-385


PART 1

Fat Tails and (Un)happy Endings: Correlation Bias, Systemic Risk and Prudential Regulation
Jorge A. Chan-Lau, International Monetary Fund and The Fletcher School, Tufts University1

Abstract
The correlation bias refers to the fact that claim subordination in the capital structure of the firm influences claim holders' preferred degree of asset correlation in the portfolios held by the firm. Using the copula capital structure model, it is shown that the correlation bias shifts shareholder preferences towards highly correlated assets. For financial institutions, the correlation bias makes them more prone to fail and raises the level of systemic risk given their interconnectedness. The implications for systemic risk and prudential regulation are assessed under the prism of Basel III, and potential solutions involving changes to the prudential framework and corporate governance are suggested.

1  The views presented in this paper are those of the author and do not necessarily reflect those of the IMF or IMF policy. The paper benefits from comments by Stijn Claessens, Dora Iakova, Robert Rennhack, and Marcos Souto. The author is responsible for any errors or omissions.


A firm or financial institution that holds a portfolio of diverse projects and/or assets is subject to correlation risk, or the risk arising from the correlation of the cash flows accrued to the different projects/assets in the portfolio. Most firms and financial institutions finance their portfolios using a mix of claims, such as equity and debt, where each claim is differentiated by its seniority in the capital structure; for example, equity is subordinated to debt. The correlation bias is the preference of different claim holders on the firm for different levels of project/asset correlation in the firm's portfolio. In particular, junior claim holders prefer portfolios where assets are highly correlated, while senior claim holders prefer a more uncorrelated portfolio. Since control of the firm is usually exercised by managers who tend to act on behalf of the most junior claim holders, the shareholders, the choice of portfolio assets tends to be biased towards highly correlated assets. This bias leads to portfolio outcomes characterized by fat tails that increase the likelihood of observing scenarios with extreme upside and downside risks. In particular, the lack of diversity in the firm's portfolio increases the likelihood that it may fail, since all the assets in the portfolio will be equally affected by a negative shock.

The implications of the correlation bias are not circumscribed to individual institutions, though. In a financial system where the correlation bias of junior claim holders is dominant and herd behavior prevalent, it would not be rare to observe black swan events, often following on the heels of extended periods of tranquility [Taleb (2009)]. The stronger the bias of junior claim holders, the more likely the financial system will oscillate between extreme periods of tranquility and financial disruption, contributing to increased procyclicality in the event of negative shocks. This is not a mere theoretical implication: fat-tail tales and their unhappy endings were dramatically illustrated by the recent global financial crisis, which originated from problems in the U.S. subprime mortgage market.

This paper argues that a copula approach to the capital structure, building on the copula pricing model first developed to analyze structured credit products, provides the right framework for understanding the correlation bias arising from the capital structure of the firm. Furthermore, the copula capital structure model is a natural generalization of the contingent claim approach to the capital structure of the firm first proposed by Black and Scholes (1973) and Merton (1974). Insights on the correlation bias derived from the copula pricing model are useful for understanding how the bias interacts with systemic risk and whether recent financial regulatory reform could address these interactions effectively.

Understanding the correlation bias

The basic contingent claim model
Black and Scholes (1973) and Merton (1974), by noting that the payoffs to equity and debt were equivalent to options on the asset value of the firm, established the foundations of the contingent claim approach for analyzing the capital structure of the firm. The approach sheds light on the potential conflicts of interest between shareholders and debt holders. Specifically, the payoff of equity, as illustrated in Figure 1, resembles the payoff of a call option where the underlying is the asset value of the firm and the strike price is determined by what is owed to debt holders. Ceteris paribus, shareholders, who hold a long asset volatility position, benefit from projects and/or portfolios that increase the volatility of the firm's asset value, as increased volatility increases the value of the call option. Shareholders, hence, exhibit a high volatility bias, while the opposite is true for debt holders. While useful for recognizing conflicts of interest between different claim holders, the Black-Scholes-Merton capital structure model does not account for the fact that the asset value of the firm is determined by the portfolio of projects and assets held by the firm. The volatility of the profit/loss of the firm, of its equity returns, and of its asset value is determined not only by the volatility of the individual projects but also by their mutual dependence (or correlation). Accounting for the correlation in the firm's project/asset portfolio requires an understanding of how correlation affects the different claims on the firm, which in turn requires setting up an appropriate analytical framework.

Figure 1  Payoff schedules of equity and debt as functions of the asset value of the firm

Tail risk and correlation
Nevertheless, it can be shown that the basic intuition of the Black-Scholes-Merton capital structure model, that shareholders benefit from increased volatility of the asset value of the firm, extends to the case of a firm undertaking a portfolio of several projects. More importantly, shareholders can increase the volatility of the firm by increasing the correlation among the different projects in the firm's portfolio. To illustrate this point, assume a firm that can choose between the following two-project portfolios, each one requiring an initial investment of 100 per project. The first portfolio contains uncorrelated projects A and B. The probability of success and failure of each project is 0.5. For simplicity, a zero rate of return is assumed. In case of success, the project returns the original investment of 100, and in case of failure the original investment is lost. The second portfolio contains projects C and D, with characteristics similar to projects A and B in the first portfolio. The only difference is that projects C and D are perfectly correlated. The three potential profit/loss scenarios are that the project portfolio loses 100, breaks even at 0, or gains 100. Figure 2 shows the probability distribution of each scenario for both portfolios.

The capital structure financing the portfolio, and who controls the firm, influence the degree of correlation of the assets in the portfolio. The first and second portfolios both have an expected value of 100. If the project portfolio is financed only with equity, the shareholders would prefer the first portfolio, as its standard deviation is smaller and the probability of losing money is only 25 percent. Once the project portfolio is financed partly with debt, the larger the share of debt, the stronger the incentives for the shareholders to choose the second portfolio, as they would accrue positive returns only in the scenario where both projects are successful. For instance, for a 50-50 mix of debt and equity, portfolio 1 yields shareholders an expected profit/loss of 25 while portfolio 2 delivers twice that amount, 50. Shareholders, hence, would prefer portfolio 2 even though the odds that the debt holders suffer a total loss are 50 percent, compared to 25 percent in portfolio 1. In contrast, bondholders would prefer portfolio 1 to portfolio 2.

Figure 2  Profit/loss distributions of the hypothetical two-project portfolios
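The arithmetic behind the 25-versus-50 comparison and the debt holders' loss probabilities quoted above can be checked in a few lines of Python. The sketch below assumes that each project pays back 100 on success and nothing on failure, and that the 200 of total investment is financed with 100 of zero-coupon debt and 100 of equity; these payoff conventions are one consistent interpretation of the example, not taken verbatim from the original figure.

from itertools import product

DEBT_FACE = 100   # 50-50 mix: 100 of debt and 100 of equity financing the 200 invested
P_SUCCESS = 0.5

def expected_claims(perfectly_correlated):
    """Expected equity and debt payoffs, plus the probability that debt holders lose everything."""
    if perfectly_correlated:
        scenarios = [((1, 1), P_SUCCESS), ((0, 0), 1 - P_SUCCESS)]   # both projects succeed or both fail
    else:
        scenarios = [(o, 0.25) for o in product((0, 1), repeat=2)]   # four equally likely outcomes
    equity = debt = p_total_loss = 0.0
    for outcome, prob in scenarios:
        portfolio_value = 100 * sum(outcome)            # each successful project pays back 100
        debt_payoff = min(portfolio_value, DEBT_FACE)
        equity_payoff = max(portfolio_value - DEBT_FACE, 0)
        equity += prob * equity_payoff
        debt += prob * debt_payoff
        if debt_payoff == 0:
            p_total_loss += prob
    return equity, debt, p_total_loss

for label, corr in [("portfolio 1 (uncorrelated)", False), ("portfolio 2 (perfectly correlated)", True)]:
    eq, dbt, p_loss = expected_claims(corr)
    print(f"{label}: expected equity = {eq:.0f}, expected debt = {dbt:.0f}, P(debt wiped out) = {p_loss:.0%}")

Under these assumptions the expected equity value is 25 for the uncorrelated portfolio and 50 for the perfectly correlated one, while the probability that debt holders lose everything rises from 25 percent to 50 percent, matching the figures quoted in the text.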


The copula capital structure model and the correlation bias
This section exploits the analogy between the capital structure of the firm and structured and securitized products to explain how structured credit pricing models can serve as an analytical and conceptual framework for evaluating issues related to portfolio correlation, systemic risk, and the correlation bias. To see this, note that, as in the case of a securitization or structured product, the total asset value of the firm depends on the cash flows from its different projects or, in the case of a bank, from its banking and trading portfolios. In turn, the value of each corresponding claim depends on its priority over the cash flows, determined by its seniority in the capital structure. Figure 3 shows the analogy between the firm and a structured product. As in the case of structured products, senior debt is protected by a buffer consisting of, in the first instance, equity and, in the second instance, subordinated debt, which in that order absorb any loss incurred by the firm or financial institution.

Figure 3  The analogy between the capital structure of a tranched structured product (senior tranche, mezzanine tranche, equity tranche) and the capital structure of the firm (senior debt, subordinated debt, equity)

The observation above suggests that the copula pricing model, as originally proposed by Vasicek (1977) and specialized to the Gaussian copula by Li (2000), could be the natural framework for analyzing conflicts of interest between shareholders and debt holders. The analysis of the capital structure based on the copula pricing model will henceforth be referred to as the copula capital structure model. In the copula approach, the value of the projects/assets included in the firm's portfolio is determined by a common factor and a project/asset-specific idiosyncratic factor. The correlation of the assets is determined by the correlation of each asset with the common factor. This framework can easily be generalized to multifactor copula models.2 Furthermore, it can be shown that the copula capital structure model is a natural extension of the contingent claim approach of Black and Scholes (1973) and Merton (1974), since the value of equity is determined by a portfolio of options on several projects, or since each tranche comprises long and short options on the value of the portfolio [Rajan et al. (2007)]. The natural advantage vis-a-vis the contingent claim approach is that the copula approach naturally accommodates several types of claims and the fact that firms hold portfolios comprising multiple projects and assets. The main insight from the copula approach to the capital structure is that it makes the correlation bias explicit, that is, the differences in claim holders' preferences for the degree of correlation among the different projects/assets held by the firm. The particular instance of the correlation bias that leads to higher idiosyncratic and systemic risk is the one associated with shareholders, since they prefer portfolios with highly correlated assets.

The copula capital structure model explains how different factors affect the value of the different claims in the capital structure of the firm. One such factor is the riskiness of the individual projects, as proxied by their probability of success. An improvement in the odds of success benefits all claims as long as the payoff of a successful project is independent of the riskiness of the project (Figure 4). If the payoff of the projects/assets increases with the level of risk, the standard result from the Black-Scholes-Merton contingent claim model applies: shareholders benefit when the firm undertakes high-risk, high-return projects. The reason for this result, again, is the convexity of the shareholders' payoff, which gains the most from upside risks. In contrast, ceteris paribus, an increase in the correlation of the cash flows of the different projects (or assets), and the corresponding increase in the risk correlation of the projects (or assets), benefits shareholders but not senior debt holders. The intuition underlying this result is as follows. Increased risk correlation leads to outcomes where the majority of projects in the portfolio either succeed or fail. In the extreme case of perfectly correlated projects or assets, the outcome is binary: either the portfolio succeeds or it fails. The downside to shareholders from outcomes close to the binary case is limited to their equity stake in the firm, which only finances part of the total portfolio or assets of the firm. Shareholders, thus, are indifferent to all scenarios where losses exceed the equity of the firm. The downside scenarios are accompanied by upside scenarios where, due to the high correlation among the different projects or assets, the portfolio bears minimal or no losses at all. As in the case of the Black-Scholes-Merton contingent claim model, the upside scenarios benefit shareholders more than debt holders due to the convexity of the payoff structure of the former: they accrue all the gains in excess of what is owed to debt holders. The nature of the payoffs associated with the downside and upside scenarios provides incentives for shareholders to bias the firm's portfolio towards highly correlated projects and/or assets. In contrast, senior debt holders have the opposite bias and would rather prefer that the assets/projects held by the firm exhibit low correlation.

Figure 4  Sensitivity of the value of corporate claims (senior debt, subordinated debt, and equity) to the riskiness of a single project and to portfolio correlation

Another important result, which also bears on the design of the prudential regulation of financial institutions, relates to how subordinated debt reacts to changes in correlation. The sensitivity of subordinated debt to project/asset correlation is non-monotonic: at low levels of correlation, the value of subordinated debt declines, but after a certain threshold is reached, its value starts increasing. Why does subordinated debt exhibit a non-monotonic relationship with correlation? The answer is as follows: subordinated debt becomes the loss-absorbing buffer once the equity buffer is exhausted in case of large portfolio losses. For very low levels of project/asset correlation, losses are small and are fully absorbed by equity. As correlation increases beyond a certain threshold, the losses could potentially exceed the equity buffer, forcing partial losses on the value of subordinated debt. Below the threshold, it is in the best interest of subordinated debt holders to keep project/asset correlation as low as possible to maximize the value of their stake in the firm, since low correlation minimizes their partial losses. Once the correlation threshold is crossed, however, the loss scenarios would likely imply full losses for subordinated debt holders and would bias their preference towards highly correlated projects/assets, since these maximize the likelihood of a sufficiently low number of failed projects. The bias of junior claim holders towards highly correlated assets and projects is determined by the relative size of equity and subordinated debt in the capital structure of the firm.3 The smaller the size of these subordinated claims (or tranches, in structured credit parlance), the stronger the incentive for subordinated claim holders to opt for high correlation on the asset side of the balance sheet. In the case of subordinated debt, the smaller the buffer provided by equity, the lower the correlation threshold would be.

2  For a concise exposition, see Andersen et al. (2003), Gibson (2004), Hull and White (2004), and Chapter 9 in Lando (2004). Coval et al. (2009) offer an accessible introduction. Rajan et al. (2007) is a textbook treatment by market practitioners. The reader should bear in mind, though, that the insights from the copula benchmark model are not restricted to the particular distributional assumptions of the model but extend to other copula models, such as Student-t, semi-parametric, and non-parametric copulas.
3  For a rigorous but accessible derivation of this result, see Gibson (2004) and Coval et al. (2009).
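Returning to the copula capital structure model described in this section, its central mechanism can be illustrated with a minimal one-factor Gaussian copula Monte Carlo: as the common-factor correlation rho rises, the expected value of equity increases while that of senior debt falls. The portfolio size, default probability, and debt share below are arbitrary illustrative choices, not parameters taken from the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def claim_values(rho, n_projects=50, p_default=0.2, debt_share=0.85, n_sims=200_000):
    """Expected payoffs of senior debt and equity in a one-factor Gaussian copula portfolio.
    Each project pays 1 if it survives and 0 if it defaults; debt face value = debt_share * n_projects."""
    threshold = norm.ppf(p_default)                    # default barrier for the latent variable
    m = rng.standard_normal((n_sims, 1))               # common (systematic) factor
    z = rng.standard_normal((n_sims, n_projects))      # idiosyncratic factors
    x = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * z      # latent drivers with pairwise correlation rho
    portfolio = np.where(x < threshold, 0.0, 1.0).sum(axis=1)
    debt_face = debt_share * n_projects
    debt = np.minimum(portfolio, debt_face)            # senior claim
    equity = np.maximum(portfolio - debt_face, 0.0)    # residual (junior) claim
    return debt.mean(), equity.mean()

for rho in (0.0, 0.3, 0.6, 0.9):
    d, e = claim_values(rho)
    print(f"rho = {rho:.1f}: senior debt = {d:.2f}, equity = {e:.2f}")

The expected portfolio value is the same for every rho, so the output isolates the redistribution effect: higher correlation fattens both tails, which benefits the convex equity claim at the expense of the concave senior debt claim.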

Correlation bias, systemic risk, and prudential regulation
The correlation bias, in combination with corporate control, could potentially induce high correlation in the asset and trading portfolios of individual institutions, raising the likelihood of their failure. Since excess portfolio correlation at the individual firm level could translate into systemic risk at the aggregate level, it becomes important to assess whether the problems induced by the correlation bias can be addressed through corporate governance and prudential regulation.

Systemic risk
The design of a prudential regulatory framework should account for the correlation bias exhibited by the different claim holders. On the one hand, if the correlation bias of junior stakeholders dominates the choice of projects and assets included in the firm's portfolio, the likelihood that the firm will fail increases. On the other hand, the correlation bias induces both upside and downside fat-tail risk in the portfolio, which could lead to wild swings from periods of tranquility to periods of turmoil. Furthermore, the excess portfolio correlation is compounded by the well-documented procyclical behavior of asset correlation, which tends to increase during periods of turmoil.4

Systemic risk would not be an issue if the failure of a financial institution were an isolated event. The interconnected nature of the financial system, however, raises the risk that an individual failure could have a domino effect on other institutions and prompt serious disruptions across different markets. The interconnectedness of the financial system arises from direct linkages between the different market participants, such as cross-institution claims, and from indirect linkages, such as exposure to common risk factors and the feedback effects (increased volatility and declining prices) stemming from the use of similar accounting and risk management practices. Furthermore, the failure of one bank could lead to a bank panic and a run on the banking system even in the absence of direct or indirect linkages. Besides interconnectedness, the impact of the correlation bias on systemic risk can be compounded by herding behavior among financial institutions. Herding behavior, which is rather common in financial markets, increases the chances of facing a too-many-to-fail (TMTF) problem, since a single adverse shock could prompt the failure of several institutions holding similar portfolios biased towards highly correlated assets [Acharya and Yorulmazer (2007)]. Herding behavior could be prompted by several factors, such as reputational issues that cause institutions to mimic each other's investment behavior.5 Similarly, the trading strategies of different institutions are likely to converge to specific trades, overcrowding them and raising the possibility of generalized losses in specific sectors of the financial system. The losses incurred by credit correlation traders in 2005 and by quantitative alternative asset managers in 2007 are two recent examples of this problem. The discussion above suggests that reducing the systemic risk arising from the correlation bias requires corporate governance structures and regulatory requirements aimed at reducing correlation in the asset portfolios of financial institutions.

Corporate governance
Correlation bias risk is present when corporate control is exercised by shareholders (or by management acting on behalf of shareholders in the absence of informational asymmetries). In the particular case of banks, the share of equity in the capital structure is relatively small since, in the presence of asymmetric information, high leverage is the market solution for dealing with governance problems, as noted by Kashyap et al. (2008).6 But as noted above, high leverage would bias the bank even more towards highly correlated assets in its portfolio.7

One potential solution for reducing correlation bias risk involves changing the corporate governance of financial institutions to shift some corporate control to claim holders other than shareholders. The simplest way to give debt holders partial control of the firm would be to require them to also hold some equity in the firm. In the real world it is not uncommon to find that type of structure in bank-based financial systems in Japan and continental Europe, where banks are both lenders to and equity owners in other corporations.8 Whether bank portfolios in these countries are less correlated than those in market-based, arm's-length financial systems, such as those in the U.K. and the U.S., is an open empirical question. Absent substantial changes in corporate governance structures and corporate control, the monitoring role of debt holders must be taken over by prudential regulation. As argued in Dewatripont et al. (2010), the debt holder base of financial institutions could be too diversified and lack the required expertise to monitor or exercise control efficiently. Moreover, for institutions deemed systemic, there are explicit or implicit guarantees, like deposit insurance, that work against close monitoring of the financial institution. Under such circumstances, prudential regulation needs to fill the monitoring role of debt holders.

Prudential regulation
The 2008-9 global financial crisis has prompted a number of reform initiatives in the prudential regulation of banks and financial institutions, including the use of contingent capital, increases in minimum capital requirements, the imposition of systemic risk capital charges, and requiring originators of structured and securitized products to have some skin in the game to align their incentives with those of investors in securitized notes. The relative merits of these initiatives, from the perspective of the correlation bias, are explored in detail below.

4  See, for example, Hartmann et al. (2004). The increase in asset price correlation during periods of turmoil is prompted by several factors, including liquidity shortages due to fire sales prompted by mark-to-market and risk management practices [Brunnermeier and Pedersen (2008), Shin (2008)]. In general, the financial system is characterized by procyclicality [Borio et al. (2001)], which suggests the need for counter-cyclical regulation [Brunnermeier et al. (2009)].
5  See Scharfstein and Stein (1990) for an early reference to herding behavior among mutual fund managers and Bikhchandani and Sharma (2001) for a more recent survey. On herding behavior and fat tails, see Nirei (2006).
6  Agency problems between managers and shareholders can be accommodated in the copula approach by noting that managers would be the most junior claimants on the cash flows of the firm's portfolio, and would be even more biased than shareholders towards highly correlated projects and/or assets.
7  In emerging market countries, domestic banks are usually controlled by families or financial conglomerate holdings that own a substantial majority of shares. The low leverage of these banks works against correlation bias risk, but family- or financial conglomerate-controlled firms may raise issues related to the protection of minority shareholders' rights.
8  See Allen and Gale (2000). An added benefit of shared corporate control is that, besides reducing correlation bias risk, it also helps reduce the agency cost of debt, or the underinvestment problem [Chan-Lau (2001)]. It should be borne in mind that bank-based financial systems, while effective in reducing correlation bias problems and underinvestment, may generate other problems, such as overinvestment and inefficient liquidation [Allen and Gale (2000)].


Contingent capital and hybrid securities


Academics, policy makers, and market practitioners have argued that the use of contingent capital and/or hybrid securities could help reduce toobig-to-fail (TBTF) risk [Corrigan (2009), Dudley (2009), Strongin et al. (2009), Tarullo (2009), Tucker (2009), French et al. (2010), and BCBS (2010a)]. Under this proposal, the capital structure of systemic financial institutions should include two tranches of subordinated claims. The first and most junior tranche would be common equity, and the second most junior tranche would be subordinated debt which will convert to equity once a pre-established, pre-insolvency threshold is crossed, i.e., a decline in the ratio of common equity or regulatory capital to risk-weighted assets ratio. There are at least two strong arguments for the use of contingent capital. The first is that during distress periods, contingent capital facilitates an orderly recapitalization of a bank and/or financial institution, especially under circumstances when accessing capital markets and/or obtaining equity capital injections are difficult. The second is that the risk of dilution during periods of distress would provide shareholders with incentives to avoid excessive risk taking. At the same time, by removing ambiguity about a potential bailout of subordinated creditors in case of an institution failure, holders of convertible subordinated debt will have strong incentives to price risks correctly. In turn, more reliable prices would have a signalling and disciplinary effect on financial institutions.

from 2 percent, and adding a conservation buffer of 2 percent on top of it, raising total common equity ratio to 7 percent of RWA. National authorities could also impose a countercylical macroprudential common equity overlay of up to 2 percent of RWA. Tier-1 capital requirements are increased to 6 percent, and the minimum total capital requirement remains equal to 8 percent of RWA (Table 1). In principle, increasing the share of common equity in the capital structure of a bank helps reduce the bias towards highly correlated assets. But the increase in common equity is relative to risk-weighted assets rather than to the total assets of the firm which creates a loophole for banks wishing to exploit the correlation bias. Banks, by concentrating their portfolio on highly correlated low risk-weighted assets could satisfy both the minimum and required capital requirements while reducing common equity and increasing leverage.9 The introduction of a non-risk based leverage ratio requiring Tier-1 assets of no less than 3 percent of non-weighted assets plus off-balance sheet structures could contribute to limiting the build up of leverage. Nevertheless, since Tier-1 assets comprise assets other than common equity, the leverage ratio may not be enough to address the correlation bias risk posed by shareholders correlation preferences.

Systemic risk capital charges


While contingent capital could be useful for ensuring that banks will be able to comply with minimum regulatory capital requirements under severe circumstances, it cannot address the correlation bias problem successfully. From a functional perspective, contingent capital and hybrid securities can be classified as subordinated debt. Moreover, the convertibility feature makes contingent capital and hybrid securities resemble equity more closely and induces a stronger bias towards a highly correlated portfolio than in the case of plain subordinated debt. From the perspective of shareholders, the incentives from the equity dilution effect of exercising the convertibility option are offset by the fact that the option increases the subordination of equity in the capital structure. The more subordinated equity is, the stronger the incentives to gamble on increased volatility, including increasing the asset correlation in the banking and trading books. Another reform proposal contemplates imposing systemic capital charges, or capital charges proportional to the contribution of each bank to systemic risk.10 While systemic capital charges do not address the correlation bias directly, if the charges reflect how the failure of one bank would spill over to other banks, they could provide incentives for the bank to reduce its default risk, which should be reflected in a relatively diversified banking and trading book. There are several difficulties, however. One is related to the measurement of how much each bank contributes to systemic risk. The risk measures are based on market measures including, but not limited to, mark-to-market firm value and profit/loss statements, measures based on the prices of equity and bonds or credit default swap spreads, or measures based on a combination of balance sheet data and market prices such as distance-to-default, Moody's KMV expected default frequencies, and Altman Z-scores [Adrian and Brunnermeier (2009), Chan-Lau (2009), and Lester et al. (2008)].

Minimum capital requirements


In September 2010, the Basel Committee on Banking Supervision (BCBS) announced higher global minimum capital standards for commercial banks, following a number of recommendations aimed at revising and strengthening the prudential framework [BCBS (2009, 2010b) and Caruana (2010)]. The specific recommendations are to enhance capital quality by imposing a stricter definition of common equity, for example core capital, and to require more capital, by raising the minimum common equity requirement to 4.5 percent of risk-weighted assets (RWA) from 2 percent, and adding a conservation buffer of 2.5 percent on top of it, raising the total common equity ratio to 7 percent of RWA. National authorities could also impose a countercyclical macroprudential common equity overlay of up to 2.5 percent of RWA. Tier-1 capital requirements are increased to 6 percent, and the minimum total capital requirement remains equal to 8 percent of RWA (Table 1). In principle, increasing the share of common equity in the capital structure of a bank helps reduce the bias towards highly correlated assets. But the increase in common equity is relative to risk-weighted assets rather than to the total assets of the firm, which creates a loophole for banks wishing to exploit the correlation bias. Banks, by concentrating their portfolios on highly correlated, low risk-weighted assets, could satisfy both the minimum and required capital requirements while reducing common equity and increasing leverage.9 The introduction of a non-risk-based leverage ratio requiring Tier-1 capital of no less than 3 percent of non-weighted assets plus off-balance sheet exposures could contribute to limiting the build-up of leverage. Nevertheless, since Tier-1 capital comprises instruments other than common equity, the leverage ratio may not be enough to address the correlation bias risk posed by shareholders' correlation preferences.

9 For a simple account of the problems created by the use of risk weights, see Triana (2010) and The Economist (2010).
10 On systemic risk charges, one of the pioneering papers is Acharya (2001). The recent global financial crisis has spurred work in this area, including, among others, Adrian and Brunnermeier (2009), Chan-Lau (2010), Gauthier et al. (2010), Lester et al. (2008), and Tarashev et al. (2010). See also Chapter 13 in Acharya and Richardson (2009), and Brunnermeier et al. (2009).
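The risk-weight loophole described above can be made concrete with a stylized balance sheet. The figures and risk weights below are hypothetical; they are only meant to show how the common-equity/RWA ratio and the unweighted leverage ratio can move in opposite directions.

```python
# Stylized illustration (hypothetical numbers): a bank can satisfy a
# common-equity-to-RWA minimum while becoming more leveraged in unweighted
# terms by concentrating in low risk-weight (possibly highly correlated) assets.

def ratios(common_equity, assets, avg_risk_weight):
    rwa = assets * avg_risk_weight
    return {
        "CET1 / RWA": common_equity / rwa,
        "CET1 / total assets": common_equity / assets,
        "leverage (assets / CET1)": assets / common_equity,
    }

# Bank A: 100 of assets at a 100% average risk weight, 7 of common equity.
print(ratios(common_equity=7.0, assets=100.0, avg_risk_weight=1.0))
# Bank B: same 7% CET1/RWA, but 500 of assets at a 20% average risk weight,
# so unweighted leverage rises from roughly 14x to roughly 71x.
print(ratios(common_equity=7.0, assets=500.0, avg_risk_weight=0.2))
```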


In percent of risk-weighted assets           Basel II    Basel III
Common equity
    Minimum                                   2.0         4.5
    Conservation buffer                       n.a.        2.5
    Required                                  n.a.        7.0
Tier 1 capital
    Minimum                                   4.0         6.0
    Required                                  n.a.        8.5
Total capital
    Minimum                                   8.0         8.0
    Required                                  n.a.        10.5
Additional macroprudential overlay
    Countercyclical buffer                    n.a.        0-2.5

Source: Caruana (2010)

Table 1 Basel II and Basel III capital requirements

For the systemic risk capital charges to work, the market-based measures need to capture potential spillovers reliably, but it is often the case that markets fail to price risk correctly. Moreover, as described above, the correlation bias of shareholders also induces upside fat-tail risk that could be reflected in long periods of tranquility. If the data available for calculating the systemic risk charge spans such a period, and periods of turmoil have yet to be realized, the systemic risk charge could underestimate spillover risks. The global nature of systemic financial institutions requires the harmonization of systemic capital charges across different jurisdictions, a feat that could be difficult to accomplish. Finally, the ever-evolving nature of investment and trading strategies suggests that the nature of spillovers, and in turn a bank's contribution to systemic risk, is constantly changing; therefore, banks that are currently deemed relatively safe from a systemic risk perspective may not be so going forward.

Skin-in-the-game measures

The dramatic implosion of the structured credit market in 2008-9, especially for tranched residential mortgage-backed securities, prompted initiatives to require banks originating structured vehicles to have more skin-in-the-game, or in other words, to hold a relatively risky claim on the vehicles they originate. By forcing banks to have more of their capital at stake when structuring a vehicle, both investors and regulators expect that banks would have incentives to perform due diligence on the quality of the collateral assets. In addition, to avoid regulatory arbitrage, the total capital held against all claims or tranches in a securitized product should not be less than the capital that would be held against the collateral assets [BCBS (2010)]. Abstracting from issues related to executive compensation and the short horizon of managers, the copula capital structure model suggests that skin-in-the-game measures concentrated in junior claims on structured products are bound to fail in offsetting the correlation bias.11 The copula pricing model suggests that a bank holding the most junior, subordinated claim in a structured product has a strong incentive to include as many highly correlated assets as possible, as shown in Figure 4.

The recent performance of structured vehicles during the 2008-9 crisis suggests that the previous argument rings true. Standard market practice in tranched securitizations and structured products is for the originating bank to retain the equity tranche to assure investors in the more senior tranches that the structured vehicle is safe. Additional safeguards for senior claims include a principal and interest payment waterfall structure, which directs the cash flows from the collateral assets first to the payment of principal and interest on senior claims, effectively reducing the maturity of the senior claims [Rajan et al. (2007)]. None of these safeguards, however, addressed the correlation bias incentives of the originating bank. From the perspective of the copula pricing model framework it is not surprising that, despite these safeguards, the correlation among the collateral assets was far higher than the one used to price and manage the risk of the structured vehicles. While model risk and lack of historical data could also be singled out as responsible for the low correlation estimates used to model the prices of structured vehicles, the recent indictment of an investment bank by the U.S. Securities and Exchange Commission suggests that gaming model parameters such as correlation to benefit a certain group of claim holders is not at all unusual.12
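The correlation preferences of junior versus senior claimants can be illustrated with a small Monte Carlo experiment in the spirit of the one-factor Gaussian copula pricing model referenced above. The pool size, default probability, loss-given-default, and tranche attachment points below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# One-factor Gaussian copula simulation on a hypothetical pool (100 names,
# 2% default probability, 60% LGD): expected losses of an equity and a
# senior tranche as the common asset correlation rises.
def tranche_expected_losses(rho, n_names=100, pd=0.02, lgd=0.60,
                            equity_detach=0.03, senior_attach=0.10,
                            n_sims=50_000, seed=0):
    rng = np.random.default_rng(seed)
    c = norm.ppf(pd)                                  # default threshold
    z = rng.standard_normal((n_sims, 1))              # systematic factor
    eps = rng.standard_normal((n_sims, n_names))      # idiosyncratic shocks
    assets = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
    pool_loss = lgd * (assets < c).mean(axis=1)       # fractional pool loss
    equity_loss = np.minimum(pool_loss, equity_detach) / equity_detach
    senior_loss = np.maximum(pool_loss - senior_attach, 0.0) / (1.0 - senior_attach)
    return equity_loss.mean(), senior_loss.mean()

for rho in (0.05, 0.30, 0.60):
    eq, sr = tranche_expected_losses(rho)
    print(f"rho={rho:.2f}  E[equity tranche loss]={eq:.3f}  E[senior tranche loss]={sr:.4f}")
```

Under these assumptions the equity tranche's expected loss declines as correlation increases while the senior tranche's rises, which is the correlation bias the text attributes to the most junior claimants.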

What may work against correlation bias risk


The evaluation of the regulatory initiatives, especially Basel III, suggests that they would be rather ineffective in reducing the correlation bias of shareholders in financial institutions. This finding should not be surprising since, until now, the correlation bias has not been identified as a potential source of idiosyncratic or systemic risk. The natural question for regulators is what measures could work towards reducing the correlation bias of shareholders.

11 On incentives associated with executive compensation, see Acharya and Richardson (2009), Part III, and French et al. (2010), Chapter 6.
12 U.S. SEC (2010). The investment bank settled the charge by paying a $550 million fine.


Reduce leverage
The simplest solution, probably, is to require financial institutions to hold higher levels of common equity relative to unweighted assets rather than risk-weighted assets. The current Basel III requirements specify a maximum leverage ratio of 33 for Tier-1 capital, so the leverage ratio measured on common equity could exceed that number under certain circumstances. Even if Tier-1 were to comprise only common equity, the recommended leverage ratio implies a very thin equity layer supporting the capital structure. Calibrating the maximum common equity leverage ratio would first require setting the acceptable level of risk for a financial institution and evaluating precisely the impact of the correlation bias on the risk of the institution. Alternatively, a calibration exercise under simulated conditions could yield rough estimates of common equity leverage ratios deemed safe based on historical experience.
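The arithmetic behind the 3 percent Tier-1 leverage ratio and the implied cap of roughly 33x can be made explicit. The common-equity share of Tier-1 used below is a hypothetical figure chosen only for illustration.

```python
# Leverage implied by a non-risk-based Tier-1 ratio. With Tier-1 capital at
# 3% of unweighted assets, leverage is capped at roughly 33x; if common
# equity is only a fraction of Tier-1 (assumed 75% here), leverage measured
# on common equity alone is correspondingly higher.
tier1_ratio = 0.03
common_equity_share_of_tier1 = 0.75   # assumption for illustration

tier1_leverage = 1.0 / tier1_ratio
common_equity_leverage = 1.0 / (tier1_ratio * common_equity_share_of_tier1)

print(f"Tier-1 leverage cap:        {tier1_leverage:.1f}x")          # ~33.3x
print(f"Common equity leverage cap: {common_equity_leverage:.1f}x")  # ~44.4x
```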


Enhance corporate control by debt holders


Moving beyond potential solutions related to the regulatory framework, reducing correlation bias risk may require introducing changes in the corporate governance structure of financial institutions to yield more corporate control to debt holders. The incentives for debt holders to participate actively in the corporate control of a bank are relatively weak. Banks' debt-like instruments are mainly of a short-term nature, held by a diversified investor base, and, in the case of deposits, guaranteed by the government. To shift incentives, banks could be required to increase the share of long-term debt in their liabilities, and long-term creditors should be represented on the board of directors. An added benefit of increasing the share of long-term debt is the enhancement of the bank's asset-liability management. Moving in this direction would require a careful design of the balance of power between directors representing shareholders and those representing debt holders. The copula capital structure model could provide guidance towards this end.

Enforce the Volcker Rule and portfolio diversification requirements


Another solution is to require financial institutions to hold diversified banking and trading portfolios. Since trading portfolios can be relatively complex and the reality of trading implies frequent portfolio changes, monitoring a portfolio's degree of asset correlation is extremely difficult, both for supervisory agencies and for the institution itself. Probably the cleanest way to avoid correlation bias risk is to adopt the Volcker rule and ban systemic financial institutions from proprietary trading activities.13 In the case of the banking portfolio, existing requirements such as concentration and large position limits work towards increasing portfolio diversification. These requirements, however, are only rough guidelines for ensuring portfolio diversification, since seemingly unrelated sectors may actually be correlated due to their exposure to common risk factors. One way to ensure diversification would be to establish quantitative limits guided by a risk factor analysis performed by the regulatory agency or by the bank using its own internal models. As indicated before, model risk and, in the case of internal models, incentives to game the system could work against this solution.
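The point about common risk factors can be illustrated with the kind of simple factor model a supervisor or a bank's internal model might use to check effective diversification. The factor loadings and sector labels below are hypothetical.

```python
import numpy as np

# Hypothetical two-factor illustration: sectors that look unrelated by name
# can be highly correlated once their loadings on common risk factors
# (say, credit spreads and real estate) are taken into account.
loadings = np.array([
    [0.8, 0.1],   # loans to homebuilders (assumed exposures)
    [0.7, 0.2],   # RMBS trading positions (assumed exposures)
    [0.2, 0.8],   # commercial real estate (assumed exposures)
])
idio_var = np.array([0.2, 0.2, 0.2])        # idiosyncratic variances (assumed)

cov = loadings @ loadings.T + np.diag(idio_var)
vol = np.sqrt(np.diag(cov))
corr = cov / np.outer(vol, vol)
print(np.round(corr, 2))   # implied pairwise asset correlations
```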

Conclusions
Viewing the firm as a portfolio of projects and assets funded by claims differentiated by their seniority in the capital structure helps us extend the insights of the contingent claims model of the capital structure, based on the Black-Scholes-Merton model, to a copula capital structure model based on the copula structured credit pricing model. More importantly, by shifting to a portfolio perspective, it becomes straightforward to identify the correlation bias, which influences the preferences of different claim holders for the degree of asset correlation in the firm's portfolios. The more junior the claim, the higher the preference for higher asset correlation. The existence of the correlation bias, combined with the fact that corporate control is exercised by the most junior claimants, suggests that financial institutions, including systemic banks, may bias their banking and trading portfolios towards highly correlated assets. The portfolio bias could increase the risk that the institution fails, and owing to the high degree of interconnectedness in the financial system, systemic risk would increase since its failure would cause severe market disruptions and raise the likelihood of subsequent failures.

Force originators to hold skin, flesh, and bones in securitized products


The correlation bias shows that investors in structured and securitized products would prefer that the collateral assets exhibit different levels of correlation depending on the subordination of their claims. In particular, senior tranche holders would prefer low correlation while equity holders would prefer high correlation. To align incentives, originators usually hold the equity tranche and use waterfall structures that reduce the maturity of senior tranches, but these measures cannot address the correlation bias of the equity holders. Rather than holding the riskiest tranche from a default risk perspective, or skin-in-the-game, originators should be required to hold stakes in every single tranche of the structured and securitized product, or in other words, they have to put skin, flesh, and bones in the game.

13 The Volcker rule, initially proposed by former U.S. Federal Reserve Chairman Paul Volcker, was incorporated as section 619 in the Dodd-Frank Act approved by the U.S. Congress on June 27, 2010.


Because the systemic risk implications of the correlation bias have yet to be recognized explicitly, current prudential regulatory reform initiatives, including the use of contingent capital and hybrid securities and systemic risk charges, as well as market practices like waterfall structures in securitized products, cannot reduce correlation bias risk and its impact on systemic risk. Measures that could potentially reduce correlation bias risk include increasing substantially the share of common equity in the capital structure of financial institutions, enforcing diversification requirements, banning proprietary trading in systemic institutions, requiring securitization originators to hold stakes in all tranches of the capital structure of securitized products, and giving debt holders some control over the firm.

References

Acharya, V. V., 2001, A theory of systemic risk and design of prudential bank regulation, working paper, New York University
Acharya, V. V., and T. Yorulmazer, 2007, Too many to fail - an analysis of time-inconsistency in bank closure policies, Journal of Financial Intermediation, 16:1, 1-31
Acharya, V. V., and M. Richardson (eds.), 2009, Restoring financial stability: how to repair a failed system, John Wiley and Sons
Adrian, T., and M. Brunnermeier, 2009, CoVaR, working paper, Federal Reserve Bank of New York
Allen, F., and D. Gale, 2000, Comparing financial systems, MIT Press
Andersen, L., J. Sidenius, and S. Basu, 2003, All your hedges in one basket, Risk, November, 67-72
Basel Committee on Banking Supervision, 2009, Strengthening the resilience of the banking sector, Consultative Document, Bank for International Settlements
Basel Committee on Banking Supervision, 2010a, Proposal to ensure the loss absorbency of regulatory capital at the point of non-viability, Consultative Document, Bank for International Settlements
Basel Committee on Banking Supervision, 2010b, Group of governors and heads of supervision announces minimum global standards, press release, September 12
Bikhchandani, S., and S. Sharma, 2001, Herd behavior in financial markets, IMF Staff Papers, 47, 279-310
Black, F., and M. Scholes, 1973, The pricing of options and corporate liabilities, Journal of Political Economy, 81:3, 637-654
Borio, C., C. Furfine, and P. Lowe, 2001, Procyclicality of the financial system and financial stability: issues and policy options, BIS Paper No. 1, Bank for International Settlements
Brunnermeier, M. K., and L. Pedersen, 2008, Market liquidity and funding liquidity, Review of Financial Studies, 22:6, 2201-2238
Brunnermeier, M. K., A. Crockett, C. Goodhart, A. D. Persaud, and H.-S. Shin, 2009, The fundamental principles of financial regulation, Geneva Reports on the World Economy, Vol. 11, International Center for Monetary and Banking Studies
Caruana, J., 2010, Basel III: towards a safer financial system, speech delivered at the Third Santander International Banking Conference, Madrid, September
Chan-Lau, J. A., 2001, The impact of corporate governance structures on the agency cost of debt, available at http://ssrn.com/author=96623
Chan-Lau, J. A., 2009, Default risk codependence in the global financial system: was the Bear Stearns bailout justified? in Gregoriou, G. (ed.), The banking crisis handbook, CRC Press
Chan-Lau, J. A., 2010, Regulatory capital for too-connected-to-fail institutions: a practical proposal, forthcoming in Financial Markets, Institutions, and Instruments, 19:5, 355-379
Cont, R., and J. P. Bouchaud, 1998, Herd behavior and aggregation fluctuations in financial markets, working paper
Corrigan, G., 2009, Containing too big to fail, The Charles F. Dolan Lecture Series at Fairfield University, November
Coval, J., J. W. Jurek, and E. Stafford, 2009, The economics of structured finance, Journal of Economic Perspectives, 23:1, 3-25
Dewatripont, M., J.-C. Rochet, and J. Tirole, 2010, Balancing the banks, Princeton University Press
Dudley, W., 2009, Some lessons learned from the crisis, speech delivered at the Institute of International Bankers Membership Luncheon, New York City, October
The Economist, 2010, Third time's the charm? September 13
Forbes, K., and R. Rigobon, 2002, No contagion, only interdependence: measuring stock market comovements, Journal of Finance, 57:5, 2223-2261
French, K. R., M. N. Baily, J. Y. Campbell, J. H. Cochrane, D. W. Diamond, D. Duffie, A. K. Kashyap, F. S. Mishkin, R. G. Rajan, D. S. Scharfstein, R. J. Shiller, H. S. Shin, M. J. Slaughter, J. C. Stein, and R. M. Stulz, 2010, The Squam Lake report: fixing the financial system, Princeton University Press
Gauthier, C., A. Lehar, and M. Souissi, 2010, Macroprudential regulation and systemic capital requirements, Working Paper No. 2010-4, Bank of Canada
Gibson, M., 2004, Understanding the risk of synthetic CDOs, Finance and Economics Discussion Series 2004-36, Board of Governors of the Federal Reserve
Hartmann, P., S. Straetmans, and C. de Vries, 2004, Asset market linkages in crisis periods, The Review of Economics and Statistics, 86:1, 313-326
Hull, J., and A. White, 2004, Valuation of a CDO and nth to default CDS without Monte Carlo simulation, Journal of Derivatives, 12:2, 8-23
Kashyap, A. K., R. G. Rajan, and J. C. Stein, 2008, Rethinking capital regulation, Proceedings of the 2008 Jackson Hole Symposium, Federal Reserve Bank of Kansas City
Lando, D., 2004, Credit risk modeling, Princeton University Press
Lester, A., L. H. Pedersen, and T. Philippon, 2008, Systemic risk and macroeconomic capital, working paper, New York University
Li, D. X., 2000, On default correlation: a copula function approach, Journal of Fixed Income, 9:4, 43-54
Merton, R. C., 1974, On the pricing of corporate debt: the risk structure of interest rates, Journal of Finance, 29:2, 449-470
Nirei, M., 2006, Herd behavior and fat tails in financial markets, working paper, Carleton University
Rajan, A., G. McDermott, and R. Roy, 2007, The structured credit handbook, Wiley
Shin, H.-S., 2008, Risk and liquidity, Clarendon Lectures in Finance, Oxford University Press
Scharfstein, D., and J. Stein, 1990, Herd behavior and investment, American Economic Review, 80, 465-479
Strongin, S., A. Hindlian, and S. Lawson, 2009, Effective regulation: part 5 - ending too big to fail, Global Markets Institute, Goldman Sachs, December
Tarashev, N., C. Borio, and C. Tsatsaronis, 2010, Attributing systemic risk to individual institutions, working paper
Tarullo, D., 2009, Confronting too big to fail, speech at The Exchequer Club, Washington, D.C., October
Taleb, N., 2009, The black swan, Basic Books
Triana, P., 2010, Basel III contains seeds of more chaos, Financial Times, September 19
Tucker, P., 2009, The crisis management menu, speech at the SUERF, CEPS, and Belgian Financial Forum Conference, Crisis management at the cross-roads, Brussels, November
U.S. Securities and Exchange Commission, 2010, SEC charges Goldman Sachs with fraud in structuring and marketing of CDO tied to subprime mortgages, press release, April 16
Vasicek, O., 1977, Probability of loss on loan portfolio, Moody's KMV, San Francisco


PART 1

Empirical Analysis, Trading Strategies, and Risk Models for Defaulted Debt Securities
Michael Jacobs, Jr., Senior Financial Economist, Credit Risk Analysis Division, Office of the Comptroller of the Currency1

Abstract
This study empirically analyzes the historical performance of defaulted debt from Moody's Ultimate Recovery Database (1987-2010). Motivated by a stylized structural model of credit risk with systematic recovery risk, we argue and find evidence that returns on defaulted debt covary with determinants of the market risk premium as well as firm-specific and structural factors. Defaulted debt returns in our sample are observed to be increasing in the collateral quality or debt cushion of the issue. Returns are also increasing for issuers having superior ratings at origination, more leverage at default, higher cumulative abnormal returns on equity prior to default, or greater market-implied loss severity at default. Considering systematic factors, returns on defaulted debt are positively related to equity market indices and industry default rates.
1 The views expressed herein are those of the author and do not necessarily represent a position taken by the Office of the Comptroller of the Currency or the U.S. Department of the Treasury.

On the other hand, defaulted debt returns decrease with short-term interest rates. In a rolling out-of-time and out-of-sample resampling experiment we show that our leading model exhibits superior performance. We also document the economic significance of these results through excess abnormal returns, implementing a hypothetical trading strategy, of around 5-6 percent (2-3 percent) assuming zero (1bp per month) round-trip transaction costs. These results are of practical relevance to investors and risk managers in this segment of the fixed income market.


There exists an economic argument that, to the extent there may be opportunity costs associated with holding defaulted debt, and that the performance of such debt may vary systematically, the required return on the defaulted instruments should include an appropriate risk premium. Thus far, most research studying systematic variation in defaulted debt recoveries has focused on the influence of either macroeconomic factors [Frye (2000a, b, c; 2003), Hu and Perraudin (2002), Carey and Gordy (2007), Jacobs (2011)], supply/demand conditions in the defaulted debt markets [Altman et al. (2003)], or some combination thereof [Jacobs and Karagozoglu (2011)]. Probably the reason for this focus is the conventional wisdom that determinants of recoveries (i.e., collateral values) are thought to covary with such systematic macroeconomic measures. However, the results concerning systematic variation in recoveries have been mixed. We believe that this is due to unmeasured factors influencing the market risk premium for defaulted debt. Adequately controlling for other determinants of defaulted debt performance, potentially imperfectly correlated with standard macroeconomic indicators, is critical to understanding this.

We propose to extend this literature in several ways. First, we quantify the systematic variation in defaulted debt returns with respect to factors which influence the market risk premium for defaulted debt, which are related to investors' risk aversion or investment opportunity sets; in the process, we specify a simple stylized model of credit risk in a structural framework [Merton (1974)], having testable implications that are investigated herein. Second, we are able to analyze defaulted debt performance in segments homogenous with respect to recovery risk, through controlling for both firm- and instrument-specific covariates, and examine whether these are associated with recoveries on defaulted debt securities. Third, departing from most of the prior literature on recoveries, which has predominantly focused on measures around the time of default or at settlement, we study the relationship amongst these in the form of returns. We believe that such a focus is most relevant to market participants, both traders and buy-and-hold investors (i.e., vulture funds, or financial institutions managing defaulted portfolios), since this is an accepted measure of economic gain or loss. Finally, we are able to build parsimonious and robust econometric models, in the generalized linear model (GLM) class, that are capable of explaining and predicting defaulted debt returns, and we use these to construct trading strategies demonstrating their economic significance.

In this study, we quantify the performance of defaulted debt relative to the previously and newly proposed determinants of corporate debt recoveries, through a comprehensive analysis of the returns on this asset class. The dataset that we utilize, Moody's Ultimate Recovery Database (MURD), contains the market prices of defaulted bonds and loans near the time of default, and the prices of these instruments (or the market value of the bundle of instruments) received in settlement (or at the resolution) of default. We have such data for 550 obligors and 1,368 bonds and loans in the period 1987-2010. We examine the distributional properties of the individual annualized rates of return on defaulted debt across different segmentations in the dataset (i.e., default type, facility type, time period, seniority, collateral, original rating, industry), build econometric models to explain observed returns, and quantify potential trading gains to deploying such models.

Our principal results are as follows. We find returns to be in line with (albeit at the upper end of the range of) what has been found in the previous literature, with a mean of 28.6 percent.3 We find returns on defaulted debt to vary significantly according to contractual, obligor, equity/debt market, and economic factors. At the facility structure level, there is some evidence that returns are elevated for defaulted debt having better collateral quality rank or better protected tranches within the capital structure. At the obligor or firm level, returns are elevated for obligors rated higher at origination, more financially levered at default, or having higher cumulative abnormal returns (CARs) on equity prior to default. However, we also find returns to be increasing in the market-implied loss severity at default. We also find evidence that while defaulted debt returns vary counter to the credit cycle, as they increase with industry default rates, they also increase with aggregate equity market returns. Further, we observe that short-term interest rates are inversely related to returns on defaulted debt. Finally, we document the economic significance of these results through excess abnormal returns, in a debt-equity arbitrage trading experiment, of around 5-6 percent (2-3 percent) assuming zero (1bp per month) round-trip transaction costs.

In addition to the relevance of this research for resolving questions in the finance of distressed debt investing, and aiding practitioners in this space, our results have implications for the recently implemented supervisory Basel II capital standards for financial institutions [BCBS (2004)]. Our results indicate that time variation in the market risk premium for defaulted debt may be an important systematic factor influencing recoveries on such instruments (and by implication, their loss-given-default, LGD), which is likely not to be perfectly correlated with the business cycle. Hence, any financial institution, in making the decision about how much capital to hold as a safeguard against losses on corporate debt securities, should take into account factors such as the systematic variation in investor risk aversion and investment opportunity sets.4

Standard portfolio separation theory implies that, all else equal, during episodes of augmented investor risk aversion, a greater proportion of wealth is allocated to risk-free assets [Tobin (1958), Merton (1971)], implying lessened demand, lower prices, and augmented expected returns across all risky assets.
The probable reason why we are closer to the higher end of estimates, such as Keenan et al. (2000), is that we have included several downturn periods, such as the early 1990s and the recent one.


Indeed, Basel II requires that banks quantify downturn effects in LGD estimation [BCBS (2005, 2006)], and for the relevant kind of portfolio (i.e., large corporate borrowers having marketable debt) our research provides some guidance in this regard.

Our research also has a bearing on the related and timely issue of the debate about the so-called pro-cyclicality of the Basel capital framework [Gordy (2003)], an especially relevant topic in the wake of the recent financial crisis, where a critique of the regulation is that banks wind up setting aside more capital just at the time that they should be using it to provide more credit to businesses or to increase their own liquidity positions, in order to help avoid further financial dislocations and help revitalize the economy.

Review of the related literature


Altman (1989) develops a methodology at the time new to finance for the measurement of risk due to default, suggesting a means of ranking fixed-income performance over a range of credit-quality segments. This technique measures the expected mortality of bonds, and associated loss rates, similarly to actuarial tabulations that assess human mortality risk. Results demonstrate outperformance by risky bonds relative to riskless Treasuries over a ten-year horizon and that, despite relatively high mortality rates, B-rated and CCC-rated securities outperform all other rating categories in the first four years after issuance, with BB-rated securities outperforming all others thereafter. Gilson (1995) surveys the market practices of so-called vulture investors, noting that as the risks of such an investment style exposes one to a high level of idiosyncratic and non-diversifiable risk, those who succeed in this space must have a mastery of legal rules and institutional setting that govern corporate bankruptcy. The author further argues that such mastery can result in very high returns. Hotchkiss and Mooradian (1997) study the function of this investor class in the governance and reorganization of defaulted firms using a sample of 288 public debt defaults. They attribute better relative operating performance after default to vulture investors gaining control of the target firm in either a senior executive or an ownership role. They also find positive abnormal returns for the defaulted firms equity or debt in the two days surrounding the public revelation of a vulture purchase of such instruments. The authors conclude that vulture investors add value by disciplining managers of distressed firms. The historical performance of the Moodys Corporate Bond index [Keenan et al. (2000)] shows an annualized return of 17.4 percent in the period 1982-2000. However, this return has been extremely volatile, as most of this gain (147 percent) occurred in the period 1992-1996. Keenan et al. (2000) and Altman and Jha (2003) both arrive at estimates of a correlation to the market on this defaulted loan index of about 20 percent, implying a market risk premium of 216 bps. Davydenko and Strebuleav (2002) report similar results for non-defaulted high-yield corporate bonds (BB rated) in the period 1994-1999. From the perspective of viewing defaulted debt as an asset class, Guha (2003) documents a convergence in market value as a proportion of par with respect to bonds of equal priority in bankruptcy approaching default. This holds regardless of contractual features, such as contractual rate or remaining time-to-maturity. The implication is that while prior to default bonds are valued under uncertain timing of and recovery in the

event of default, which varies across issues according to both borrower and instrument characteristics. Upon default, such expectations become one and the same for issues of the same ranking. Cross-sectional variation in yields prior to default is due to varied perceived default risk as well as instrument structures, but as default approaches the claim on the debt collapses to a common claim on the expected share of the emergence value of the firm's assets due to the creditor class. Consequently, the contract rate on the debt pre-default is no longer the relevant valuation metric with respect to restructured assets. This was predicted by the Merton (1974) theoretical framework, in which credit spreads on a firm's debt approach the expected rate of return on the firm's assets as leverage increases to the point where the creditors become the owners of the firm. Schuermann (2003) echoed the implications of this argument by claiming that cash flows post-default represent a new asset. Altman and Jha (2003), regressing the Altman/Salomon Center defaulted bond index on S&P 500 returns for the period 1986-2002, arrive at an 11.1 percent required return (based upon a 20.3 percent correlation estimate). Altman et al. (2003) examine the determinants of recoveries on defaulted bonds, in a setting of systematic variation in aggregate recovery risk, based on market values of defaulted debt securities shortly following default. The authors find that the aggregate supply of defaulted debt securities, which tends to increase in downturn periods, is a key determinant of aggregate as well as instrument-level recovery rates. The authors' results suggest that while systematic macroeconomic performance may be associated with elevated LGD, the principal mechanism by which this operates is through supply and demand conditions in the distressed debt markets. More recently, Altman (2010) reports that the Altman-NYU Salomon Center Index of defaulted bonds (bank loans) returned 12.6 percent (3.4 percent) over the period 1986-2009 (1989-2009). Machlachlan (2004), in the context of proposing an appropriate discount rate for workout recoveries for regulatory purposes in estimating economic LGD [BCBS (2005)], outlines a framework that is motivated by a single-factor CAPM model and obtains similar results in two empirical exercises. First, regressing the Altman-NYU Salomon Center Index of Defaulted Public Bonds in the period 1987-2002 on the S&P 500 equity index, he obtains a 20 percent correlation, implying a market risk premium (MRP) of 216 bps. Second, he looks at monthly secondary market bid quotes for the period April 2002-August 2003, obtaining a beta estimate of 0.37, which, according to the Frye (2000c) extension of the Basel single-factor framework, implies a recovery value correlation of 0.21 and an MRP of 224 bps.



Finally, considering studies of recovery rates (or LGDs), Acharya et al. (2007) examine the empirical determinants of ultimate LGD at the instrument level, and find that the relationship between the aggregate supply of defaulted debt securities and recoveries does not hold after controlling for industry-level distress. They argue for a fire-sale effect that results when most firms in a troubled industry may be selling collateral at the same time. These authors' results imply that systematic macroeconomic performance may not be a sole or critical determinant of recovery rates on defaulted corporate debt. Carey and Gordy (2007) examine whether there is systematic variation in ultimate recoveries at the obligor (firm-level default incidence) level, and find only weak evidence of systematic variation in recoveries.

Recently, building upon these two studies, Jacobs and Karagozoglu (2011) empirically investigate the determinants of LGD and build alternative predictive econometric models for LGD on bonds and loans using an extensive sample of most major U.S. defaults in the period 1985-2008. They build a simultaneous equation model in the beta-link generalized linear model (BLGLM) class, identifying several specifications that perform well in terms of the quality of estimated parameters as well as overall model performance metrics. This extends prior work by modeling LGD both at the firm and at the instrument level. In a departure from the extant literature, the authors find economic and statistical significance of firm-specific, debt-market, and equity-market variables; in particular, information from either the equity or the debt markets at around the time of default (measures of either distressed debt prices or cumulative equity returns, respectively) has predictive power with respect to ultimate LGD, which is in line with recent recovery and asset pricing research. They also document a new finding, that larger firms (loans) have significantly lower (higher) LGDs.
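The beta-link GLM used by Jacobs and Karagozoglu (2011) is not reproduced here, but the flavour of a fractional-response LGD regression can be sketched with a crude logit-transform stand-in on simulated data. The covariates, coefficients, and sample are hypothetical.

```python
import numpy as np

# Crude stand-in for a fractional-response LGD model (not the BLGLM of the
# paper): logit-transform LGD in (0,1) and fit by ordinary least squares on
# simulated, hypothetical covariates (debt cushion, leverage at default).
rng = np.random.default_rng(1)
n = 500
debt_cushion = rng.uniform(0, 1, n)
leverage     = rng.uniform(1, 10, n)
true_logit   = 0.5 - 2.0 * debt_cushion + 0.15 * leverage + rng.normal(0, 0.5, n)
lgd = 1.0 / (1.0 + np.exp(-true_logit))            # simulated LGD in (0,1)

eps = 1e-6
p = np.clip(lgd, eps, 1 - eps)
y = np.log(p / (1 - p))                            # logit transform
X = np.column_stack([np.ones(n), debt_cushion, leverage])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coefs, 2))   # roughly recovers (0.5, -2.0, 0.15)
```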

Theoretical framework

In this section we lay out the theoretical basis for returns on post-default recoveries, denoted $r^D_s$, where s denotes a recovery segment (i.e., seniority classes, collateral types, etc.). Following an intertemporal version of the structural modelling framework for credit risk [Merton (1971), Vasicek (1987, 2002)]5, we may write the stochastic process describing the instantaneous evolution of the ith firm's6 asset returns at time t as:

$$ \frac{dV_{i,t}}{V_{i,t}} = \mu_i\,dt + \sigma_i\,dW_{i,t} \qquad (1) $$

where $V_{i,t}$ is the asset value, $\sigma_i$ is the return volatility, $\mu_i$ is the drift (which can be taken to be the risk-free rate r under the risk-neutral measure), and $W_{i,t}$ is a standard Wiener process that decomposes as (this is also known as a standardized asset return):

$$ dW_{i,t} = \beta_{i,X}\,dX_t + \left(1-\beta_{i,X}^2\right)^{1/2} dZ_{i,t} \qquad (2) $$

where the processes (also standard Wiener processes) $X_t$ and $Z_{i,t}$ are the systematic and the idiosyncratic (or firm-specific) risk factors, respectively; and the factor loading $\beta_{i,X}$ is constant across all firms in a PD segment homogenous with respect to default risk (or across time for the representative firm).7 It follows that the instantaneous asset-value correlation amongst firms (or segments) i and j is given by:

$$ \rho_{i,j} \equiv \frac{1}{dt}\,\mathrm{Corr}\!\left[ \frac{dV_{i,t}}{V_{i,t}}, \frac{dV_{j,t}}{V_{j,t}} \right] = \beta_{i,X}\,\beta_{j,X} \qquad (3) $$

Defining the recovery rate on the ith defaulted asset8 at time t as $R_{i,t}$, we may similarly write the stochastic process describing its evolution as:

$$ \frac{dR_{i,t}}{R_{i,t}} = \mu^R_i\,dt + \sigma^R_i\,dW^R_{i,t} \qquad (4) $$

where $\mu^R_i$ is the drift (which can be taken to be the expected instantaneous return on collateral under the physical measure, or the risk-free rate under the risk-neutral measure), $\sigma^R_i$ is the volatility of the collateral return, and $W^R_{i,t}$ is a standard Wiener process that for recovery segment $s^R$ decomposes as:

$$ dW^R_{i,t} = \beta_{i,s^R}\,dX^R_t + \left(1-\beta_{i,s^R}^2\right)^{1/2} dZ^R_{i,t} \qquad (5) $$

where the two systematic factors are bivariate normal, each standard normal, but with correlation r between each other:

$$ \begin{pmatrix} dX_t \\ dX^R_t \end{pmatrix} \sim N\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & r \\ r & 1 \end{pmatrix} \right) \qquad (6) $$

This set-up follows various extensions of the structural model framework for systematic recovery risk. What they have in common is that they allow the recovery process to depend upon a second systematic factor, which may be correlated with the macro (or market) factor $X_t$ [Frye (2000 a, b, c), Pyktin (2003), Dullman and Trapp (2004), Giese (2005), Rosch and Scheule (2005), Hillebrand (2006), Barco (2007), Jacobs (2011)]. In this general and more realistic framework, returns on defaulted debt may be governed by a stochastic process distinct from that of the firm. This is the case where the asset is secured by cash, third-party guarantees, or assets not used in production. In this setting, it is possible that there are two salient notions of asset value correlation: one driving the correlation amongst defaults, and another driving the correlation between collateral values and the returns on defaulted assets in equilibrium. This reasoning implies that it is entirely conceivable that, especially in complex banking facilities, cash flows associated with different sources of repayment should be discounted differentially according to their level of systematic risk. In not distinguishing how betas may differ between defaulted instruments secured differently, it is quite probable that investors in distressed debt may misprice such assets. It is common to assume that the factor loading in (5) is constant amongst debt instruments within specified recovery segments, so that the recovery-value correlation for segment $s^R$ is given by $\beta^2_{i,s^R} \equiv R^2_{s^R}$.9

5 Note that this is also the approach underlying the regulatory capital formulae [BCBS (2003)], as developed by Gordy (2003).
6 This could also be interpreted as the ith PD segment or an obligor rating.
7 Vasicek (2002) demonstrates that under the assumption of a single systematic factor, an infinitely granular credit portfolio, and LGD that does not vary systematically, a closed-form solution for capital exists that is invariant to portfolio composition.
8 We can interpret this as an LGD segment (or rating) or debt seniority class.
9 Indeed, for many asset classes the Basel II framework mandates constant correlation parameters equally across all banks, regardless of particular portfolio exposure to industry or geography. However, for certain exposures, such as wholesale non-high volatility commercial real estate, this is allowed to depend upon the PD for the segment or rating [BCBS (2004)].
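A short simulation of the two-factor set-up in equations (1)-(6) may help make the notation concrete. All parameter values below are hypothetical.

```python
import numpy as np

# Simulate one step of the processes in (1)-(6) with hypothetical parameters:
# a common default factor X, a recovery factor X^R correlated with it at r,
# and standardized asset and recovery returns built from them.
rng = np.random.default_rng(42)
n = 100_000
beta_x, beta_sr, r = 0.45, 0.35, 0.60          # loadings and factor correlation (assumed)

# correlated systematic factors, eq (6)
x = rng.standard_normal(n)
x_r = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

# standardized asset and recovery returns, eqs (2) and (5)
dW = beta_x * x + np.sqrt(1 - beta_x**2) * rng.standard_normal(n)
dW_r = beta_sr * x_r + np.sqrt(1 - beta_sr**2) * rng.standard_normal(n)

print("asset-asset correlation (eq 3), theory:", beta_x**2)
print("asset-recovery correlation, theory:", beta_x * beta_sr * r,
      " simulated:", np.corrcoef(dW, dW_r)[0, 1].round(3))
```

The simulated correlation between the asset and recovery shocks is close to the product of the two loadings and the factor correlation, illustrating how default risk and recovery risk are linked but not identical in this framework.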


If we take the further step of identifying this correlation with the correlation to a market portfolio (arguably a reasonable interpretation in the asymptotic single risk-factor (ASRF) framework [Vasicek (1987), Gordy (2003)]), then we can write $R^2_{s^R} \equiv \rho^2_{s^R,M}$. It then follows from the standard capital asset pricing model (CAPM) that the relationship between the defaulted debt instrument and market rates of return is given by the beta coefficient:

$$ \beta_{s^R,M} = \frac{\mathrm{Cov}\!\left( \frac{dR_{i,t}}{R_{i,t}}, \frac{dV_{M,t}}{V_{M,t}} \right)}{\mathrm{Var}\!\left( \frac{dV_{M,t}}{V_{M,t}} \right)} = \frac{\sigma^R_i \sqrt{R^2_{s^R}}}{\sigma_M} \qquad (7) $$

where $\sigma_M$ is the volatility of the market return. We may now conclude that in this setting the return on defaulted debt on the sth exposure (or segment), $r^D_s$, is equal to the expected return on the collateral, which is given by the sum of the risk-free rate $r_{rf}$ and a debt-specific risk premium $\lambda_s$:

$$ r^D_s = r_{rf} + \frac{\sigma^R_i \sqrt{R^2_{s^R}}}{\sigma_M}\left( r_M - r_{rf} \right) = r_{rf} + \beta_{s^R,M}\,\mathrm{MRP} = r_{rf} + \lambda_{s^R} \qquad (8) $$

where the market risk premium is given by $\mathrm{MRP} \equiv r_M - r_{rf}$ (also assumed to be constant through time) and the debt-specific risk premium is given by $\lambda_{s^R} = \beta_{s^R,M}\,\mathrm{MRP}$. This approach identifies the systematic factor with the standardized return on a market portfolio $r_M$, from which it follows that the asset correlation to the former can be interpreted as a normalized beta in a single-factor CAPM (or simply a correlation between the defaulted debt's and the market's return), which is given by $\beta_{i,s^R} \equiv (R^2_{s^R})^{1/2}$.

In subsequent sections, we pursue alternative estimations of $\beta_{i,s^R}$, by regressing actual defaulted debt returns on some kind of market factor or other measure of systematic risk (i.e., aggregate default rates),10 while controlling for firm- or instrument-specific covariates.
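To illustrate equation (8), the following back-of-the-envelope calculation uses hypothetical inputs for the collateral volatility, recovery-value correlation, market volatility, and MRP; it is not a calibration to the data studied here.

```python
# Hypothetical worked example of equation (8): required return on defaulted
# debt as the risk-free rate plus a beta-scaled market risk premium.
r_rf  = 0.02    # risk-free rate (assumed)
sig_R = 0.30    # collateral return volatility (assumed)
R2    = 0.20    # recovery-value correlation R^2 (assumed)
sig_M = 0.20    # market return volatility (assumed)
mrp   = 0.06    # market risk premium r_M - r_rf (assumed)

beta = sig_R * R2**0.5 / sig_M          # eq (7)
r_D  = r_rf + beta * mrp                # eq (8)
print(f"beta = {beta:.2f}, required return = {r_D:.1%}")   # ~0.67 and ~6.0%
```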

Empirical methodology

We adopt a simple measure, motivated in part by the rich dataset of defaulted bonds and loans available to us, which analyzes the observable market prices of debt at two points in time: the default event (i.e., bankruptcy or other financial distress qualifying as a default) and the resolution of the default event (i.e., emergence from bankruptcy under Chapter 11 or liquidation under Chapter 7). We calculate the annualized rate of return on the ith defaulted debt instrument in segment j as:

$$ r^D_{i,j} = \left( \frac{P^E_{i,j,t^E_{i,j}}}{P^D_{i,j,t^D_{i,j}}} \right)^{\frac{1}{t^E_{i,j}-t^D_{i,j}}} - 1 \qquad (9) $$

where $P^D_{i,j,t^D_{i,j}}$ ($P^E_{i,j,t^E_{i,j}}$) are the prices of debt at the time of default $t^D_{i,j}$ (emergence $t^E_{i,j}$). An estimate of the return for the jth segment (seniority class or collateral type) can then be formed as the arithmetic average across the loans in that segment:

$$ \bar r^D_{j} = \frac{1}{N^D_j} \sum_{i=1}^{N^D_j} \left[ \left( \frac{P^E_{i,j,t^E_{i,j}}}{P^D_{i,j,t^D_{i,j}}} \right)^{\frac{1}{t^E_{i,j}-t^D_{i,j}}} - 1 \right] \qquad (10) $$

where $N^D_j$ is the number of defaulted loans in recovery group j. A measure of the recovery uncertainty in recovery class s is given by the sample standard error of the mean annualized return:

$$ s_{\bar r^D_{j}} = \left\{ \frac{1}{N^D_j \left(N^D_j-1\right)} \sum_{i=1}^{N^D_j} \left[ \left( \frac{P^E_{i,j,t^E_{i,j}}}{P^D_{i,j,t^D_{i,j}}} \right)^{\frac{1}{t^E_{i,j}-t^D_{i,j}}} - 1 - \bar r^D_{j} \right]^2 \right\}^{1/2} \qquad (11) $$

10 Alternatively, we can estimate the vector of parameters $(\mu, \sigma^R_i, \beta_{i,s}, \beta_{i,s^R}, r)^T$ by full-information maximum likelihood (FIML), given a time series of default rates and realized recovery rates. The resulting estimate of $\beta_{i,s^R}$ can be used in equation (8) in conjunction with estimates of the market volatility $\sigma_M$, the debt-specific volatility $\sigma^R_i$, the MRP $(r_M - r_{rf})$, and the risk-free rate $r_{rf}$ in order to derive the theoretical return on defaulted debt within this model [Machlachlan (2004)]. Also see Jacobs (2011) for how these quantities can be estimated from prices of defaulted debt at default and at emergence of different seniority instruments.
11 Experts at Moody's compute an average of trading prices from 30 to 45 days following the default event, where each daily observation is the mean price polled from a set of dealers, with the minimum/maximum quote thrown out.
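The return and dispersion measures in equations (9)-(11) are straightforward to compute from prices at default and at emergence. The toy prices and resolution times below are hypothetical, not taken from MURD.

```python
import numpy as np

# Equations (9)-(11) on hypothetical data: price at default, price (or value
# of the settlement bundle) at emergence, and time-to-resolution in years.
p_default   = np.array([35.0, 20.0, 55.0, 10.0])   # per 100 of face (assumed)
p_emergence = np.array([60.0, 18.0, 80.0, 25.0])   # assumed
ttr_years   = np.array([1.5, 2.0, 1.0, 2.5])       # assumed

rdd = (p_emergence / p_default) ** (1.0 / ttr_years) - 1.0       # eq (9)
mean_rdd = rdd.mean()                                            # eq (10)
std_err  = rdd.std(ddof=1) / np.sqrt(len(rdd))                   # eq (11)

print(np.round(rdd, 3), round(mean_rdd, 3), round(std_err, 3))
```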

Empirical results: summary statistics of returns on defaulted debt by segment

In this section and the following, we document our empirical results. These are based upon our analysis of defaulted bonds and loans in the Moody's Ultimate Recovery Database (MURD) release as of August 2010. This contains the market values of defaulted instruments at or near the time of default,11 as well as the values of such pre-petition instruments (or of instruments received in settlement) at the time of default resolution. This database is largely representative of the U.S. large-corporate loss experience from the late 1980s to the present, including most of the major corporate bankruptcies occurring in this period. Table A1, in the Appendix, summarizes basic characteristics of the simple annualized return on defaulted debt (RDD) in (10) by default event type (bankruptcy under Chapter 11 versus out-of-court settlement) and instrument type (loans, broken down by term and revolving, versus bonds). Here we also show the means and standard deviations of two other key quantities, the time-to-resolution (i.e., time from default to time of resolution) and the outstanding-at-default, for both the RDD sample as well as for the entire MURD database (i.e., including instruments not having trading prices at default). We conclude from this that our sample is for the most part representative of the broader database. Across all instruments, average time-to-resolution is 1.6 (1.4) years and average outstanding-at-default is U.S.$216.4M (U.S.$151.7M) for the analysis (broader) samples.

                                        Quintiles of time from last cash pay to default date (TID)
Quintiles of time from default
to resolution date (TTR)                      1        2        3        4        5    Total
1      Average (%)                        64.19    22.10    20.81    91.53    92.08    58.90
       Std error of the mean (%)          24.57    15.41    12.16    31.75    34.68    11.57
2      Average (%)                        25.75    38.34    30.55    41.38    57.99    39.71
       Std error of the mean (%)          26.11    17.09    18.16    19.92    20.85     8.89
3      Average (%)                        38.32    28.24    10.04    19.79     8.82    20.02
       Std error of the mean (%)          13.54    19.03     8.12     9.16     8.21     5.71
4      Average (%)                        29.75    26.69    27.19    23.55    34.22    27.32
       Std error of the mean (%)          14.08     9.21    11.21     6.26    20.16     4.99
5      Average (%)                        -4.99     8.23     8.90     8.96    -2.97     6.03
       Std error of the mean (%)           9.21     6.82     5.06     3.91     8.57     2.61
Total  Average (%)                        35.04    25.93    19.28    28.67    38.32    28.56
       Std error of the mean (%)           9.66     6.78     5.26     5.51     9.23     3.11

Table 1 Returns on defaulted debt (RDD)1 of defaulted instruments by quintiles of time-to-resolution (TTR)2 and time-in-distress (TID)3 from last cash pay to default date (Moody's Ultimate Recovery Database 1987-2010)
1 Annualized return on defaulted debt from just after the time of default (first trading date of debt) until the time of ultimate resolution.
2 TTR: duration in years from the date of default (bankruptcy filing or other default) to the date of resolution (emergence from bankruptcy or other settlement).
3 TID: duration in years from the date of the last interest payment to the date of default (bankruptcy filing or other default).

The bottom panel of Table A1 represents the entire Moody's database, whereas the top panel summarizes the subset for which we can calculate RDD measures. The version of MURD that we use contains 4,050 defaulted instruments, 3,500 (or 86.4 percent) of which are bankruptcies, and the remaining 550 are distressed restructurings. On the other hand, in the RDD subset, the vast majority (94.6 percent, or 1,322) of the total (1,398) are Chapter 11. One reason for this is that the times-to-resolution of the out-of-court settlements are so short (about two months on average) that it is more likely that post-default trading prices at 30-45 days from default are not available. Second, many of these were extreme values of RDD, and were heavily represented in the outliers that we chose to exclude from the analysis (30 of 35 statistical outliers).12

The overall average of the 1,398 annualized RDDs is 28.6 percent, with a standard error of the mean of 3.1 percent, ranging widely from -100 percent to 893.8 percent. This suggests that there were some very high returns: the 95th percentile of the RDD distribution is 191 percent, and in well over 70 cases investors would have more than doubled their money holding defaulted debt. We can observe this in Figure 1, the distribution of RDD, which has an extremely long right tail. We observe that the distribution of RDD is somewhat different in the case of out-of-court settlements as compared to bankruptcies, with respective mean RDDs of 37.3 percent for the former and 28.1 percent for the latter. The standard errors of mean RDDs are also much higher in the non-bankruptcy population, 15.3 percent for out-of-court versus 3.2 percent for bankruptcies. The data is well represented by bank loans, 36.8 percent (38.1 percent) of the RDD (total MURD) sample, or 514 (1,543) out of 1,398 (4,050) instruments. Loans appear to behave somewhat differently than bonds, having slightly higher mean and standard error of mean RDDs, 32.1 percent and 26.4 percent, respectively.

12 Based upon extensive data analysis in the Robust Statistics package of the S-Plus statistical computing application, we determined 35 observations to be statistical outliers. The optimal cutoff was determined to be about 1,000%, above which we removed the observation from subsequent calculations. There was a clear separation in the distributions, as the minimum RDD in the outlier subset is about 17,000%, more than double the maximum in the non-outlier subset.
13 We have two sets of collateral types: the 19 lowest-level labels appearing in MURD (Guarantees, Oil and Gas Properties, Inventory and Accounts Receivable, Accounts Receivable, Cash, Inventory, Most Assets, Equipment, All Assets, Real Estate, All Non-current Assets, Capital Stock, PP&E, Second Lien, Other, Unsecured, Third Lien, Intellectual Property, and Intercompany Debt), and a six-level high-level grouping that we constructed from these (Cash, Accounts Receivables & Guarantees; Inventory, Most Assets & Equipment; All Assets & Real Estate; Non-Current Assets & Capital Stock; PP&E & Second Lien; and Unsecured & Other Illiquid Collateral). The latter high-level groupings were developed in consultation with recovery analysis experts at Moody's Investors Service.

Table A2 summarizes the distributional properties of RDD by seniority rankings (bank loans; senior secured, unsecured, and subordinated bonds; and junior subordinated bonds) and collateral types.13 Generally, better secured or higher ranked instruments exhibit superior post-default return performance, although this does not hold monotonically across collateral classes, nor is it consistent across recovery risk measures. Moreover, while the standard error of mean RDD (which we can argue reflects recovery uncertainty) tends to be lower for more senior instruments, it tends to be higher for those which are better secured. Average RDD is significantly higher for secured as compared to unsecured facilities, 34.5 percent versus 23.6 percent, respectively. Focusing on bank loans, we see a wider split of 33.0 percent versus 19.8 percent for secured and unsecured, respectively. However, by broad measures of seniority ranking, mean RDD exhibits a non-monotonic, increasing pattern in seniority, while the standard error of RDD is decreasing in seniority. Average RDD is 32.3 percent and 36.6 percent for loans and senior secured bonds, as compared to 23.7 percent and 33.2 percent for senior unsecured and senior subordinated bonds, decreasing to 15.6 percent for junior subordinated instruments. However, while unsecured loans have lower post-default

Count

Average of RDD 22.94% 45.09% 17.92% 31.57% 21.99% 29.77% 23.71% 28.56%

Standard error of mean RDD 5.04% 13.25% 5.24% 5.66% 8.29% 8.43% 5.94% 3.34% Debt tranche groups 1st quintile TSI 2nd quintile TSI 3rd quintile TSI 4th quintile TSI 5th quintile TSI NDA / SDB1 SDA / SDB2

Count

Average of RDD 35.06% 10.98% 25.77% 42.41% 47.48% 42.77% 24.06% 25.23% 19.67% 28.56%

Standard error of mean RDD 12.88% 4.85% 5.40% 6.33% 9.57% 4.89% 7.65% 9.44% 5.25% 3.11%

Rating groups

AA-A BBB BB B CC-CCC Investment grade (BBB-A) Junk grade (CC-BB) Total

146 586 285 65 125 211 996 1398

172 373 413 342 98 449 259 164 526 1398

NDA / NDB3 NDB / SDA Total


4

Table 2 Returns on defaulted debt1 of defaulted instruments by credit rating at origination (Moodys Ultimate Recovery Database 1987-2010)
1 Annualized Return on defaulted debt (RDD) from just after the time of default (first trading date of debt) until the time of ultimate resolution.

1 No debt above and some debt below. 2 Some debt above and some debt below. 3 No debt above and no debt below. 4 No debt below and some debt above.

returns than secured loans, within the secured loan class we find that returns exhibit a humped pattern as collateral quality goes down in rank, an increase in RDD from 22.6 percent for cash, to 46.2 percent for All assets and real estate, to 29.0 percent for PP&E and second lien. Table 1 summarizes RDDs by two duration measures: the time-in-distress (TID), defined as the time (in years) from the last cash pay date to the default state, and the time-to-resolution (TTR), the duration from the date of default to the resolution or settlement date. Analysis of these measures helps us to understand the term-structure of the defaulted debt returns. We examine features of RDD by quintiles of the TTR and TID distributions, where the first refers to the bottom fifth of durations in length, and the fifth quintile the top longest. The patterns we observe are that RDD is decreasing (albeit non-monotonically) in TTR, while it exhibits a U-shape in TID. Table 2 summarizes RDD by the earliest available Moodys senior unsecured credit rating for the obligor. This provides some evidence that returns on defaulted debt are augmented for defaulted obligors that had, at origination (or time of first public rating), better credit ratings or higher credit quality. Mean RDD generally declines as credit ratings worsen, albeit unevenly. While the average is 22.9 percent for the AA-A category, it goes up to 45.1 percent for BBB, then down to 17.9 percent for BB, but up again to 31.6 percent for B, and finally down to 21.99 percent for the lowest category CC-CCC. Table 3 summarizes RDD by measures of the relative debt cushion of the defaulted instrument. MURDTM provides the proportion of debt either above (degree of subordination) or below (debt cushion) any defaulted instrument, according to the seniority rank of the class to which the instrument belongs. It has been shown that the greater the level of debt below,

Table 3 Returns on defaulted debt5 of defaulted instruments by Tranche Safety Index6 (TSI) quintiles and categories (Moody's Ultimate Recovery Database 1987-2010)

Debt tranche group    Count   Average of RDD   Std err of mean RDD
1st quintile TSI        172           35.06%                12.88%
2nd quintile TSI        373           10.98%                 4.85%
3rd quintile TSI        413           25.77%                 5.40%
4th quintile TSI        342           42.41%                 6.33%
5th quintile TSI         98           47.48%                 9.57%
NDA/SDB1                449           42.77%                 4.89%
SDA/SDB2                259           24.06%                 7.65%
NDA/NDB3                164           25.23%                 9.44%
NDB/SDA4                526           19.67%                 5.25%
Total                  1398           28.56%                 3.11%

1 No debt above and some debt below. 2 Some debt above and some debt below. 3 No debt above and no debt below. 4 No debt below and some debt above. 5 Annualized return on defaulted debt (RDD) from just after the time of default (first trading date of debt) until the time of ultimate resolution. 6 An index of tranche safety calculated as TSI = (% debt below - % debt above + 1)/2.

or the less debt above, the better the ultimate recovery on the defaulted debt [Keisman et al. (2000)]. We can also think of this position in the capital structure in terms of tranche safety: the less debt above and the more debt below, the more likely it is that there will be some recovery. While this is not the entire story, this measure has been demonstrated to be an important determinant of ultimate recovery, so we suspect that it will have some bearing on the performance of defaulted debt. Here, we offer evidence that returns on defaulted debt are increasing in the degree of tranche safety, or relative debt cushion, as measured by the difference between debt below and debt above. To this end, we define the tranche safety index (TSI) as: TSI = (% debt below - % debt above + 1)/2 (12)

This ranges between zero and one. It is near zero when debt above is greatest and debt below is nil (i.e., the thinnest tranche or the most subordinated), and closest to unity when debt below is maximized and debt above is nil (i.e., the thickest tranche or the greatest debt cushion). In Table 3, we examine the quintiles of the TSI, where the bottom quintile of the TSI distribution represents the least protected instruments, and the top quintile the most protected. Additionally, we define several dummy variables in order to capture this phenomenon, as in Brady et al. (2006). No debt above and some debt below (NDA/SDB) represents the group that should be the best protected, while Some debt above and some debt below (SDA/SDB) and No debt above and no debt below (NDA/NDB) represent intermediate groups, and No debt below

and some debt above (NDB/SDA) should be the least protected group. Table 3 shows that there is a U-shape overall in average RDD with respect to quintiles of TSI: starting at 35.1 percent in the bottom quintile, reaching a minimum of 11.0 percent in the second, and increasing thereafter to 25.8 percent, 42.3 percent, and 47.5 percent at the top. With regard to the dummy variables, we observe a general decrease in average RDD from the most to the least protected categories: 42.8 percent, 24.1 percent, 25.2 percent, and 19.7 percent from NDA/SDB to NDB/SDA.
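A minimal sketch of how the TSI of equation (12) and the debt-cushion dummy groups can be computed, assuming the percent-debt-below and percent-debt-above figures are available as fractions in a pandas DataFrame (the column names here are hypothetical):

```python
import pandas as pd

def tranche_safety(df: pd.DataFrame) -> pd.DataFrame:
    """Compute the tranche safety index of equation (12) and the debt-cushion
    dummy groups, assuming 'pct_debt_below' and 'pct_debt_above' hold the
    fractions of debt ranked below and above the instrument."""
    out = df.copy()
    # TSI = (% debt below - % debt above + 1) / 2, bounded in [0, 1]
    out["tsi"] = (out["pct_debt_below"] - out["pct_debt_above"] + 1.0) / 2.0
    below = out["pct_debt_below"] > 0
    above = out["pct_debt_above"] > 0
    out["tranche_group"] = "SDA/SDB"                        # some debt above and below
    out.loc[~above & below, "tranche_group"] = "NDA/SDB"    # best protected
    out.loc[~above & ~below, "tranche_group"] = "NDA/NDB"   # intermediate
    out.loc[above & ~below, "tranche_group"] = "NDB/SDA"    # least protected
    # Quintiles of the TSI distribution, as used in Table 3
    out["tsi_quintile"] = pd.qcut(out["tsi"], 5, labels=False, duplicates="drop") + 1
    return out
```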

Summary statistics and distributional properties of covariates

In this section we first analyze the independent variables available to us and calculated from MURD, as well as data attached to this from Compustat and CRSP, and then discuss a multivariate regression model to explain RDD. Table A3 in the Appendix summarizes the distributional properties of key covariates in our database and their univariate correlations with RDD. We have grouped these into the following categories: financial statement and market valuation, equity price performance, capital structure, credit quality/credit market, instrument/contractual, macro/cyclical, and durations/vintage. The financial variables, alone or in conjunction with equity market metrics, are extracted from Compustat or CRSP. The Compustat variables are taken from the date nearest to the first instrument default of the obligor, but no nearer than one month, and no further than one year, to default. These are shown in the top panel of Table A3 in the Appendix. First, we see some evidence that leverage is positively related to RDD, suggesting that firms that were nearer to their default points prior to the event had defaulted debt that performed better over the resolution period, all else equal. This is according to an accounting measure, book value of total liabilities/book value of total assets, which has a substantial positive correlation of 17.2 percent. Next, we consider a set of variables measuring the degree of market valuation relative to stated value, or alternatively the degree of intangibility in assets: Tobin's Q, market value of total assets/book value of total assets (MVTA/BVTA), book value of intangibles/book value of total assets, and the price/earnings ratio. In this group, there is evidence of a positive relationship to RDD, which is strongest by far for MVTA/BVTA, having a correlation of 18.5 percent. This enters into some of our candidate regression models significantly, but not the final model chosen. We speculate that the intuition here is akin to a growth stock effect: such firms may have a greater range of investment options available that, when they come to fruition, result in better performance of the defaulted debt on average. We display three covariates in Table A3 that measure the cash-flow generating ability of the entity: the free asset ratio (FAR), free cash flow/book value of total assets, and cash flow from operations/book value of total assets. Results show a generally negative correlation between cash flow ratios and RDD, notably a fairly strong negative correlation of -9.0 percent for FAR. The intuition here may be considered strained: it is natural to think that the ability to throw off cash signals a firm with a viable underlying business model, which is conducive to a successful emergence from default and well-performing debt; however, it may also be taken to mean an excess of cash with no good investments to apply it to, and a basically poor economic position. Finally for the financials, we have a set of variables that measure some notion of accounting profitability: net income/book value of total assets, net income/market value of total assets, retained earnings/book value of total assets, return on assets, and return on equity. These have a generally modest inverse relation to RDD. As with other dimensions of risk considered here, we resort to a backward story, relative to the expectation that less-bad profitability mitigates credit or default risk: that is, conditional on being in default, better accounting profitability is not rewarded with better performance of the debt; if anything, weaker profitability is associated with somewhat better returns on the debt from default to resolution. However, none of these enter the multiple regressions. Equity price performance metrics were extracted from CRSP at the

date nearest to the first default date of the obligor, but no nearer than one month to default. These are shown in the second panel from the top of Table A3. The one-month equity price volatility, the standard deviation of daily equity returns in the month prior to default, exhibits a modest positive correlation of 2.5 percent with RDD. This sign is explainable by an option-theoretic view of recoveries, since the value of a call option on the residual cash flows of the firm accruing to creditors is expected to increase in asset value volatility, which is reflected to some degree in equity volatility. On the other hand, the one-year expected equity return, defined as the average return on the obligor's stock in excess of the risk-free rate in the year prior to default, exhibits a modest degree of negative correlation (-6.4 percent), which we find somewhat puzzling. The cumulative abnormal returns on equity, the returns in excess of a market model in the 90 days prior to default, have the strongest positive relationship to RDD in this group, 10.3 percent. This is understandable, as the equity markets may have a reasonable forecast of the firm's ability to become rehabilitated in the emergence from default, as reflected in less poor stock price performance relative to the market. Note that this is one of two variables in this group that enters the candidate regression models, and it is also the basis of our trading analysis. Market capitalization of the firm relative to the market as a whole, defined as the logarithm of the scaled market capitalization,14 also has a significant negative univariate correlation with RDD of -8.6 percent, and enters all of the regressions, as does CAR. We have no clear a priori expectation for this variable, as

14 The scale factor is defined as the market capitalization of the stock exchange where the obligor trades times 10,000.


perhaps we would expect larger companies to have the resiliency to better navigate financial distress, counter to what we are measuring. The stock price relative to the market, which is the percentile ranking of the stock price level within the market, has a moderate negative correlation with RDD of -4.4 percent. As this variable is intended to capture the delisting effect when a stock price goes very low, we might expect the opposite sign on this correlation. Finally, the stock price trading range, defined as the stock price minus its three-year low divided by the difference between its three-year high and three-year low, shows only a small negative correlation with RDD of -2.9 percent. This is another counterintuitive result, as one might expect that when a stock is doing better than its recent range it should be a higher quality firm whose debt might perform better in default, but the data show little, if any, relationship here. Capital structure metrics, extracted from the MURD data at the default date of the obligor, are shown in the third panel from the top of Table A3. The two measures of capital structure complexity, the number of instruments (NI) and the number of creditor classes (NCC), show an inverse relationship to defaulted debt performance. NI (NCC) has a modest negative correlation with RDD of -4.0 percent (-3.0 percent). We might expect a simpler capital structure to be conducive to favorable defaulted debt performance according to a coordination story. Note that neither of these variables enters the final regression models. While most companies in our database have relatively simple capital structures, with NI and NCC having medians of 6 and 2, respectively, there are some rather complex structures (the respective maxima are 80 and 7). We have three variables in this group that measure the nature of debt composition: percent secured debt (PSCD), percent bank debt (PBD), and percent subordinated debt (PSBD). The typical firm in our database has approximately 40 percent to 50 percent of its debt either secured, subordinated, or bank funded. All of these exhibit moderate positive correlations with RDD of 8.8 percent, 9.4 percent, and 8.7 percent for PSCD, PBD, and PSBD, respectively. The result on PBD may be attributed either to a monitoring story or, alternatively, to an optimal foreclosure boundary choice [Carey and Gordy (2007), Jacobs (2011)]. However, as with the complexity variables, none of these appear in the regression models. The credit quality/credit market metrics were extracted from the MURD database and Compustat just before the default date of the obligor. These are shown in the fourth panel from the top of Table A3. Two of the variables in this group have what may seem, at first glance, to be counterintuitive relationships to RDD. First, the Altman Z-score, which is available in Compustat, has a relatively large negative correlation of -8.8 percent (note that higher values of the Z-score indicate lower bankruptcy risk). Second, the LGD implied by the trading price at default, which forms the basis for the RDD calculation, exhibits a moderate positive correlation with RDD of 11.3

percent. As this variable has been shown to have predictive power for ultimate LGD [Emery et al. (2007), Jacobs and Karagozoglu (2011)], at first glance this relationship may seem difficult to understand. But note that the same research demonstrates that LGD at default is also, in some sense, an upwardly biased estimate of ultimate LGD. Consequently, we might just as well expect the opposite relationship to hold, as intuitively it may be that otherwise high quality debt performs better on average if it is (perhaps unjustifiably) beaten down. Indeed, LGD enters all of our regression models with this sign, and as a more influential variable than suggested by this correlation, but the Z-score does not make it into any of our regression models. The remaining variables in this group reflect the Moody's rating at the first point that the debt is rated. These are the Moody's Original Credit Rating Investment Grade Dummy (MOCR-IG), Moody's Original Credit Rating Major Code (MOCR-MJC; i.e., numerical codes for whole rating classes), Moody's Original Credit Rating Minor Code (MOCR-MNC; i.e., numerical codes for notched rating classes), and Moody's Long Run Default Rate Minor Code (MLRDR-MNC; i.e., empirical default rates associated with notched rating classes). The only meaningful univariate result here is the small positive correlation of 2.4 percent in the case of MOCR-IG. This variable enters significantly into all of our candidate regression models. Next, we consider instrument/contractual metrics, extracted from the MURD database at the default date of the obligor. These are shown in the third panel from the bottom of Table A3. Consistent with the analysis of the previous section, the correlations with RDD in this group reflect the extent to which instruments that are more senior, better secured, or in safer tranches experience better performance of defaulted debt. The seniority rank (SR) and collateral rank (CR) codes both have negative and reasonably sized correlation coefficients with RDD, -9.6 percent and -10.0 percent for SR and CR, respectively. Percent debt below and percent debt above are positively (negatively) correlated with RDD, with coefficients of 9.4 percent (-5.2 percent). And the TSI, constructed from the latter two variables as detailed in the previous section, has a significant positive correlation with RDD of 9.7 percent. TSI enters into two of our three candidate regression models. Next, we consider macroeconomic/cyclical metrics measured near the default date of the obligor. These are shown in the second panel from the bottom of Table A3. These correlations are evidence that defaulted debt returns vary counter-cyclically with respect to the credit cycle, or that debt defaulting in downturn periods tends to perform better. We have measures of the aggregate default rate, extracted from Moody's Default Rate Service (DRS) database. These are lagging 12-month default rates, with cohorts formed on an overlapping quarterly basis.15 The

15 For example, the default rate for the fourth quarter of 2008 would represent the fraction of Moody's rated issuers at the beginning of 4Q07 that defaulted over the subsequent year. We follow the practice of adjusting for withdrawn ratings by subtracting one-half of the number of withdrawn obligors from the number available to default (the denominator of the default rate).
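The withdrawal adjustment in footnote 15 reduces to a one-line calculation; a minimal sketch with made-up cohort counts:

```python
def lagging_default_rate(defaults: int, issuers_at_start: int, withdrawn: int) -> float:
    """12-month default rate with the withdrawal adjustment of footnote 15:
    half of the withdrawn issuers are removed from the denominator."""
    return defaults / (issuers_at_start - 0.5 * withdrawn)

# Illustrative (made-up) cohort: 60 defaults out of 1,500 rated issuers,
# 100 of which had their ratings withdrawn during the year.
print(f"{lagging_default_rate(60, 1500, 100):.2%}")  # ~4.14%
```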


Table 4 Beta-link generalized linear model for annualized returns on defaulted debt1 (Moody's Ultimate Recovery Database 1987-2008)

Entries are partial effects, with p-values in parentheses.

Variables                                                              Model 1              Model 2              Model 3
Intercept                                                              0.3094 (1.42E-03)    0.5101 (9.35E-04)    0.4342 (6.87E-03)
Moody's 12-month lagging speculative grade default rate by industry    2.0501 (1.22E-02)    2.2538 (6.94E-03)    2.1828 (1.36E-02)
Fama-French excess return on market factor                             1.3814 (8.73E-03)    1.5085 (6.35E-03)    1.5468 (9.35E-03)
Collateral rank - secured                                              0.2554 (7.21E-03)    0.2330 (1.25E-02)    0.2704 (9.36E-04)
Tranche safety index                                                   0.4548 (3.03E-02)    0.4339 (3.75E-02)    -
Loss given default                                                     0.3273 (1.44E-02)    0.2751 (3.88E-02)    -
Cumulative abnormal returns on equity prior to default                 0.3669 (1.51E-03)    0.3843 (1.00E-03)    0.4010 (9.39E-04)
Total liabilities to total assets                                      0.2653 (5.22E-08)    -                    -
Moody's original rating - investment grade                             0.2118 (2.80E-02)    0.2422 (6.84E-03)    0.1561 (6.25E-02)
One-month treasury yield                                               -0.4298 (3.04E-02)   -0.3659 (1.01E-02)   -0.4901 (3.36E-02)
Size relative to the market                                            -                    0.0366 (4.76E-02)    0.0648 (3.41E-03)
Market value to book value                                             -                    0.1925 (2.64E-05)    0.1422 (5.63E-03)
Free-asset ratio                                                       -                    -                    -0.2429 (2.25E-02)
Degrees of freedom                                                     959                  958                  783
Log-likelihood                                                         -592.30              -594.71              -503.99
McFadden pseudo R-squared (in-sample)                                  32.48%               38.80%               41.73%
McFadden pseudo R-squared (out-of-sample) bootstrap mean               21.23%               12.11%               17.77%
McFadden pseudo R-squared (out-of-sample) bootstrap standard error     2.28%                1.16%                1.70%

1 Annualized return on defaulted debt (RDD) from just after the time of default (first trading date of debt) until the time of ultimate resolution.

four versions of this are for the all-corporate and speculative grade segments, both in aggregate and by industry. All of these have a mild, albeit significant, positive linear correlation with RDD. The Moody's all-corporate quarterly default rate (MACQDR), having a 6.7 percent correlation with RDD, is one of the systematic risk variables to enter the candidate regression models. The next set of variables represent measures of aggregate equity and money market performance, the Fama and French (FF) portfolio returns commonly used in the finance literature, measured on a monthly basis in the month prior to instrument default.16 These are the excess return on the market (FF-ERM), the relative return on small stocks (FF-ERSS),17 and the relative return on value stocks (FF-ERVS).18 We see that RDD is somewhat positively associated with the aggregate return on the market factor FF-ERM, having a modest correlation of 7.2 percent.19 Similarly, RDD is positively but weakly related to FF-ERSS, with a correlation of only 2.8 percent. On the other hand, RDD seems to have a small negative correlation of -4.3 percent with FF-ERVS. We have one more aggregate equity market variable, two-year stock market volatility, defined as the standard deviation of the S&P 500 return in the two years prior to default, which shows a modest positive linear correlation with RDD of 5.7 percent. Note that FF-ERM is the only one of these aggregate equity market variables to enter significantly in the multiple regression models. Another set of systematic variables are aggregate interest rates, the one-month treasury bill yield and the ten-year treasury bond yield, which exhibit moderate negative correlations with RDD of -10.2 percent and -7.0 percent, respectively. However, only the one-month treasury bill yield appears in the final regressions. The intuition here may be that defaulted debt performs better in low interest rate environments, which are associated with lower aggregate economic activity as well as a higher marginal utility of consumption on the part of investors.20 The final set of variables that we consider in this section are duration/vintage metrics, based on calculations from extracted dates in the MURD database. These are shown in the bottom panel of Table A3. We can conclude from this section that the duration/vintage measures that would be in one's information set at the time of instrument default are largely

16 These can be downloaded from Kenneth French's website: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html 17 This is more commonly termed the small minus big (SMB) portfolio [Fama and French (1992)]. 18 This is more commonly termed the high minus low (HML) portfolio, meaning high versus low book-to-market ratio [Fama and French (1992)]. 19 Results for the S&P 500 return, not shown, are very similar. 20 The term spread, or the difference between a long-term and a short-term treasury yield, was neither significant on a univariate basis nor in the regressions. This held across several different choices of term structures. Consequently, we do not show these results.


uninformative regarding the performance of defaulted debt. The variables that we have chosen to display include time from origination to default, time from first rating to default, time from last cash pay date to default, time from default to emergence, and time from origination to maturity.

Multivariate regression analysis of defaulted debt returns

In this section, we discuss the construction and results of multiple regression models for RDD. In order to cope with the highly non-normal nature of the RDD distribution, we turn to the various techniques that have been employed in the finance and economics literature to model data with constrained dependent variables, either qualitative or bounded in some region. However, much of the credit risk related literature has focused on qualitative dependent variables, which is the case that probability-of-default (PD) estimation naturally falls into. Maddala (1983, 1991) introduces, discusses, and formally compares the different generalized linear models (GLMs). Here we consider the case most relevant for RDD estimation, and the least pursued in the GLM literature. In this context, since we are dealing with a random variable in a bounded region, this is most conveniently modeled by employing a beta distribution. Consequently, we follow Mallick and Gelfand (1994), in which the GLM link function21 is taken as a mixture of cumulative beta distributions, which we term the beta-link GLM (BLGLM) [see Jacobs and Karagozoglu (2011) for an application of the GLM to estimating ultimate LGD]. The coefficient estimates and diagnostic statistics for our leading three models are shown in Table 4. These are determined through a combination of automated statistical procedures22 and expert judgment, where we try to balance the sometimes competing considerations of the statistical quality of the estimates and the sensibility of the models. Essentially, the three models shown in Table 4 had the best fit to the sample data, while spanning what we thought was the best set of risk factors, based upon prior expectations as well as the univariate analysis. Note that there is much overlap between the models, as Model 2 differs from Model 1 by two variables (it has MV/BV instead of TL/TA, and adds RSIZ), and Model 3 from Model 2 by two variables (FAR in lieu of TSI and LGD). Across the three candidate models, we observe that all coefficient estimates attain a high degree of statistical significance, in almost all cases at better than the 5 percent level,23 and in many cases at much better than the 1 percent level. The number of observations for which we had all of these explanatory variables is the same for Models 1 and 2 (968), but there is a sizable drop-off for Model 3 to only 792 observations. In all cases, the likelihood functions converged to a stable global maximum.24 Model 3 achieves the best in-sample fit by McFadden pseudo r-squared of 41.7 percent, followed by Model 2 (38.8 percent) and Model 1 (32.5 percent). In terms of maximized log-likelihood, Model 3 is far better than the others (-504.0), and Model 1 is only slightly better than Model 2 (-592.3 versus -594.7) in spite of having one less explanatory variable. However, as these models are not nested, this may not be so meaningful a comparison. Overall, we deem these to signify good fit, given the nonlinearity of the problem, the relatively high dimension, as well as the high level of noise in the RDD variable.
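As a rough illustration of modeling a bounded response with a beta likelihood, the sketch below fits a plain beta regression with a logit mean link by maximum likelihood. This is a simplified stand-in, not the Mallick-Gelfand mixture-of-cumulative-betas link actually used in the paper, and it assumes the response has been rescaled to lie strictly inside (0, 1):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist
from scipy.special import expit  # logistic function

def fit_beta_regression(X: np.ndarray, y: np.ndarray):
    """Maximum-likelihood beta regression with a logit mean link.
    X: (n, k) design matrix including an intercept column.
    y: response rescaled strictly inside (0, 1), e.g. a squashed RDD.
    A simplified stand-in for the beta-link GLM of the paper."""
    n, k = X.shape

    def neg_loglik(params):
        coefs, log_phi = params[:k], params[k]
        mu = expit(X @ coefs)            # mean in (0, 1) via the logit link
        phi = np.exp(log_phi)            # precision parameter
        a, b = mu * phi, (1.0 - mu) * phi
        return -np.sum(beta_dist.logpdf(y, a, b))

    res = minimize(neg_loglik, np.zeros(k + 1), method="BFGS")
    return res.x[:k], np.exp(res.x[k]), -res.fun  # coefficients, precision, log-likelihood

# Illustration on simulated data
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_mu = expit(X @ np.array([0.3, 2.0, -0.4]))
y = rng.beta(true_mu * 5.0, (1.0 - true_mu) * 5.0)
coefs, phi, loglik = fit_beta_regression(X, y)
print(coefs, phi, loglik)
```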

We now turn to the signs and individual economic significance of the variables. Note that we report partial effects (PEs), which are akin to the coefficient estimates in an ordinary least squares regression. Roughly speaking, a PE represents the change in the dependent variable for a unit change in a covariate, holding other variables fixed at their average sample values.25 First, we consider the systematic risk variables. In the case of the Moody's speculative default rate by industry, appearing in all models, we see PEs ranging from 2.05 to 2.25. This implies that a percentage point elevation in aggregate default rates adds about 2 percent to the return on defaulted debt on average, all else equal, which can be considered highly significant in an economic sense. For example, the near quadrupling in default rates between 1996 and 2001 would imply an increase in expected RDD of about 12 percent. On the other hand, the PEs on the one-month treasury yield are in the range of -0.49 to -0.37, so that debt defaulting when short-term rates are about 2 percent higher will experience close to 1 percent deterioration in performance, ceteris paribus. Second, across all three regression models, RDD has a significant (at the 5 percent level) and positive loading on FF-ERM, with PEs ranging from 1.38 to 1.55, implying that a 5 percent increase in the aggregate equity market return augments defaulted debt returns by about 6 percent. Next, we consider the contractual variables. The dummy variable for secured collateral has PEs ranging from 0.23 to 0.27 across models, suggesting that the presence of any kind of security can be expected to augment expected RDD by about 25 percent, which is an economically significant result. The TSI, appearing only in Models 1 and 2, has a PE ranging from 0.43 to 0.45, suggesting that going up a single decile in this measure can increase RDD by anywhere between 4 percent and 5 percent.
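The back-of-the-envelope readings above follow directly from multiplying a partial effect by the change in the covariate; a tiny sketch using two of the Model 1 partial effects quoted above (a local, linear interpretation only):

```python
# Approximate change in expected RDD implied by Model 1 partial effects,
# holding other covariates at their sample averages.
model1_pe = {
    "spec_default_rate_by_industry": 2.0501,
    "one_month_treasury_yield": -0.4298,
}

# A one-percentage-point rise in the industry speculative-grade default rate:
print(model1_pe["spec_default_rate_by_industry"] * 0.01)   # ~ +0.021, i.e. about +2% RDD

# Short-term rates about two percentage points higher at default:
print(model1_pe["one_month_treasury_yield"] * 0.02)        # ~ -0.009, i.e. close to -1% RDD
```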

21 In the terminology of GLMs, the link function connects the expectation of some function of the data (usually the expected value of the random variable) to a linear function of explanatory variables. 22 To this end, we employ an alternating-direction stepwise model selection algorithm in the MASS library of the R statistical software. There were five candidate leading models that were tied as best; we eliminated two of them that we judged to have economically unreasonable features. 23 Moody's investment grade rating in Model 3 is on the borderline, having a p-value of 0.06, just shy of significance at the 5 percent level. 24 The estimation was performed in S-PLUS 8.0 using built-in optimization routines. 25 See Maddala (1981) for a discussion of this concept in the context of probit and logit regressions.


Table 5 Excess abnormal trading returns1 of the beta-link generalized linear model for annualized returns on defaulted debt2 (Moody's Ultimate Recovery Database 1987-2008)

                                                  Model 1               Model 2               Model 3
                                                  Mean      P-value     Mean      P-value     Mean      P-value
Zero transaction costs                            0.0051    3.65E-03    0.0049    2.79E-04    0.0062    1.98E-03
1 bp per month round-trip transaction costs       0.0032    7.76E-02    0.0019    7.90E-03    0.0025    6.45E-02

1 We formulate a trading strategy as follows. At the time of default, if forecasted returns according to the model over the expected time-to-resolution, in excess of the cumulative excess returns on equity in the three months prior to default, are positive (negative), then we form a long (short) position in the debt. Abnormal excess returns are then measured relative to a market model (three-factor Fama-French) from the time of default to resolution. 2 Annualized return on defaulted debt (RDD) from just after the time of default (first trading date of debt) until the time of ultimate resolution.
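A minimal sketch of the trading rule described in footnote 1 of Table 5 and of a three-factor abnormal-return measurement; the column names ('forecast_rdd_over_ttr', 'equity_excess_ret_3m_pre', 'mkt_rf', 'smb', 'hml', 'rf') are hypothetical placeholders, and the factor-model estimation here is a generic OLS sketch rather than the authors' exact procedure:

```python
import numpy as np
import pandas as pd

def form_positions(df: pd.DataFrame) -> pd.Series:
    """Long/short signal in the spirit of Table 5: go long the defaulted
    instrument when the model's forecast RDD over the expected time-to-resolution
    exceeds the cumulative excess equity return in the three months before
    default, otherwise go short."""
    signal = df["forecast_rdd_over_ttr"] - df["equity_excess_ret_3m_pre"]
    return pd.Series(np.where(signal > 0, 1, -1), index=df.index, name="position")

def abnormal_return(debt_ret: pd.Series, ff_factors: pd.DataFrame) -> pd.Series:
    """Abnormal return of the defaulted-debt position relative to a three-factor
    Fama-French model, estimated by ordinary least squares."""
    y = (debt_ret - ff_factors["rf"]).to_numpy()
    X = np.column_stack([np.ones(len(y)), ff_factors[["mkt_rf", "smb", "hml"]].to_numpy()])
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Alpha plus residual: the part of the excess return not explained by the factors
    return pd.Series(y - X @ betas + betas[0], index=debt_ret.index, name="abnormal_ret")
```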

Turning to the credit quality/market variables, for LGD at default, appearing only in Models 1 and 2, PEs are about 0.28-0.33, implying that a 10 percent lower expected recovery rate priced by the market at default can lead to about a 3 percent higher expected RDD. The dummy variable for a Moody's investment grade rating at origination, appearing in all models, has PEs ranging from 0.16 in Model 3 to 0.24 in Model 2. This tells us that fallen angels are expected to have about 15-25 percent better returns on their defaulted debt. The single relative stock price performance variable, CAR, appearing in all three models, has PEs ranging from 0.37 to 0.40. This says that, for example, a firm with 10 percent better price performance relative to the market in the 90 days prior to default will experience about 4 percent better return on its defaulted debt. In the case of the financial ratios, TL/TA appears only in Model 1, having a PE of 0.27. This means that the debt of a defaulted firm having 10 percent higher leverage at default will have about a 3 percent greater return. MV/BV appears in Models 2 and 3, with respective PEs of 0.19 and 0.14, so that a 10 percent higher market valuation translates on average into nearly a 2 percent better return on defaulted debt. Finally in this group, the cash flow measure FAR appears only in Model 3, with a PE of -0.24. This implies that if a defaulted firm has 10 percent greater cash generating ability by this measure, then, holding other factors constant, its RDD should be about 2.5 percent lower. Finally, the size of the firm relative to the market appears only in Models 2 and 3, with PEs of about 0.04 to 0.06. As this variable is in logarithmic terms, we interpret this as saying that if a defaulted firm doubles in relative market capitalization, we should expect its RDD to be augmented by around 5 percent, all other factors being held constant. In order to settle upon a favored or leading model, we perform an out-of-sample and out-of-time analysis. We re-estimate the models for different subsamples of the available data, starting from the middle of the dataset in 1996. We then evaluate how the model predicts the realized RDD a year ahead. We employ a resampling procedure (a nonparametric bootstrap), sampling randomly with replacement from the

development dataset (i.e., the period 1987-1996), and in each iteration re-estimating the model. Then, from the year ahead (i.e., the 1997 cohort), we resample with replacement and evaluate the goodness-of-fit of the model. This is performed 1,000 times; then a year is added, and the procedure is repeated until the sample is exhausted. At the end of the procedure, we collect the r-squareds and study their distribution for each of the three models. The results show that the mean out-of-sample r-squared of Model 1 is the highest, at 21.2 percent, followed by Model 3 (17.8 percent) and Model 2 (12.1 percent). On the basis of the numerical standard errors (on the order of 1-2 percent), we deem these to be significantly distinct. Given the best performance on this basis, in conjunction with other considerations, we decide that Model 1 is the best. The other reasons for choosing Model 1 are its parsimony relative to Model 2, and that it contains a credit market variable (LGD), which we believe makes for a more compelling story. Note that this procedure is robust to structural breaks, as the model is redeveloped over an economic cycle; in each iteration the same variables are chosen, and the models display the same relative performance over time. Finally, in Table 5, we evaluate the economic significance of these results. We formulate a trading strategy as follows. At the time of default, if forecasted returns according to the model over the expected time-to-resolution exceed the cumulative excess returns on equity in the three months prior to default, then we form a long position in the debt; otherwise we form a short position in the defaulted instrument. Abnormal excess returns are then measured relative to a market model (three-factor Fama-French) from the time of default to resolution. The results show excess abnormal returns, in this defaulted debt trading experiment, of around 5-6 percent (2-3 percent) assuming zero (1bp per month) round-trip transaction costs. These are statistically significant, and are understandably lower, with higher p-values, when we factor in transaction costs. Also, results are not highly differentiated across models, with Model 3 performing about 1 percent better assuming no transaction costs, and Model 1 having a similar margin of outperformance relative to the other models assuming transaction costs. Given that the latter is arguably the more realistic scenario, we still favor Model 1 because it generates superior excess returns in this trading strategy.
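A schematic of the expanding-window nonparametric bootstrap described above; the model-fitting and scoring routines are left as placeholder callables, and the 'default_year' column name is an assumption, since the authors' exact code is not given:

```python
import numpy as np
import pandas as pd

def rolling_bootstrap_validation(data: pd.DataFrame, fit, score,
                                 first_train_end: int = 1996,
                                 last_year: int = 2008,
                                 n_boot: int = 1000,
                                 seed: int = 0) -> pd.DataFrame:
    """Out-of-sample/out-of-time check in the spirit of the procedure above.
    'fit(train_df)' should return a fitted model and 'score(model, test_df)'
    a pseudo r-squared; both are user-supplied placeholders."""
    rng = np.random.default_rng(seed)
    records = []
    for cutoff in range(first_train_end, last_year):
        train = data[data["default_year"] <= cutoff]
        test = data[data["default_year"] == cutoff + 1]
        if test.empty:
            continue
        for _ in range(n_boot):
            # Resample the development window and the next-year cohort with
            # replacement, refit, and record the out-of-sample fit.
            train_bs = train.sample(len(train), replace=True, random_state=int(rng.integers(1 << 31)))
            test_bs = test.sample(len(test), replace=True, random_state=int(rng.integers(1 << 31)))
            model = fit(train_bs)
            records.append({"cutoff": cutoff, "oos_r2": score(model, test_bs)})
    return pd.DataFrame(records)
```

The distribution of 'oos_r2' per model is then what the text summarizes via the bootstrap mean and standard error reported in Table 4.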


Conclusion
In this paper, we have empirically studied the market performance of a long history of defaulted debt. We examined the distributional properties of the return on defaulted debt (RDD) measure across different segmentations of the dataset (i.e., default type, facility type, time period, seniority, and industry), and developed multiple regression models for RDD in the generalized linear model (GLM) class. We found that defaulted debt returns vary significantly according to several factors. There is some evidence that RDD is elevated for debt having a better collateral quality rank or better protected tranches within the capital structure, and for obligors rated higher at origination, larger in market capitalization relative to the market, more financially levered, or having higher cumulative abnormal returns on equity (CARs) at default. However, RDD is increasing in the market-implied loss severity at default (loss given default, LGD). We also find evidence that returns vary counter-cyclically, as they are positively correlated with industry default rates. Furthermore, they are inversely related to short-term interest rates and positively related to returns on the equity market. We identify a leading econometric model of RDD that performs well out-of-time and out-of-sample. Finally, we document the economic significance of these results through excess abnormal returns, in a debt-equity arbitrage trading experiment, of around 5-6 percent (2-3 percent) assuming zero (1bp per month) round-trip transaction costs.

References

Acharya, V., S. T. Bharath, and A. Srinivasan, 2007, Does industry-wide distress affect defaulted firms? Evidence from creditor recoveries, Journal of Financial Economics, 85, 787-821
Altman, E. I., 1989, Measuring corporate bond mortality and performance, Journal of Finance, 44, 909-922
Altman, E. I., and S. Jha, 2003, Market size and investment performance of defaulted bonds and loans: 1987-2002, working paper, New York University Stern School of Business
Altman, E. I., and B. Karlin, 2010, Special report on defaults and returns in the high-yield bond and distressed debt market: the year 2009 in review and outlook, NYU Salomon Center report, February
Altman, E. I., A. Resti, and A. Sironi, 2003, Default recovery rates in credit risk modelling: a review of the literature and empirical evidence, working paper, New York University Salomon Center, February
Barco, M., 2007, Going downturn, Risk, September, 38-44
Basel Committee on Banking Supervision, 2003, International convergence of capital measurement and capital standards, Bank for International Settlements (BIS), June
Basel Committee on Banking Supervision, 2005, Guidance on paragraph 468 of the framework document, BIS, July
Basel Committee on Banking Supervision, 2006, International convergence of capital measurement and capital standards: a revised framework, BIS, June
Carey, M., and M. Gordy, 2007, The bank as grim reaper: debt decomposition and recoveries on defaulted debt, working paper, Federal Reserve Board
Davydenko, S., and I. Strebulaev, 2002, Strategic behavior, capital structure and credit spreads: an empirical investigation, working paper, London Business School
Dullman, K., and M. Trapp, 2004, Systematic risk in LGD: an empirical analysis, working paper, University of Mannheim
Emery, K., R. Cantor, D. Keisman, and S. Ou, 2007, Moody's Ultimate Recovery Database: special comment, Moody's Investors Service, April
Fama, E. F., and K. R. French, 1992, The cross-section of expected stock returns, Journal of Finance, 47, 427-465
Frye, J., 2000a, Collateral damage, Risk, April, 91-94
Frye, J., 2000b, Collateral damage detected, Federal Reserve Bank of Chicago, Emerging Issues Series, October, 1-14
Frye, J., 2000c, Depressing recoveries, Risk, 13:11, 108-111
Frye, J., 2003, A false sense of security, Risk, August, 63-67
Giese, G., 2005, The impact of PD/LGD correlations on credit risk capital, Risk, April, 79-84
Gilson, S., 1995, Investing in distressed situations: a market survey, Financial Analysts Journal, November-December, 8-27
Gordy, M., 2003, A risk-factor model foundation for ratings-based bank capital rules, Journal of Financial Intermediation, 12, 199-232
Guha, R., 2003, Recovery of face value at default: empirical evidence and implications for credit risk pricing, working paper, London Business School
Hillebrand, M., 2006, Modelling and estimating dependent loss given default, Risk, September, 120-125
Hotchkiss, E. S., and R. M. Mooradian, 1997, Vulture investors and the market for control of distressed firms, Journal of Financial Economics, 43, 401-432
Hu, Y. T., and W. Perraudin, 2002, The dependence of recovery rates and defaults, working paper, Bank of England
Jacobs, Jr., M., 2011, Empirical implementation of a 2-factor structural model for loss-given-default, Journal of Financial Transformation, 31, 31-43
Jacobs, Jr., M., and A. K. Karagozoglu, 2011, Modeling ultimate loss-given-default on corporate debt, Journal of Fixed Income (forthcoming)
Keenan, S. C., D. T. Hamilton, and A. Berthault, 2000, Historical default rates of corporate bond issuers: 1920-1999, Moody's Investors Service, January
Keisman, D., and K. van de Castle, 2000, Suddenly structure mattered: insights into recoveries of defaulted debt, corporate ratings commentary, Standard and Poor's
Maclachlan, I., 2004, Choosing the discount factor for estimating economic LGD, working paper, Australia and New Zealand Banking Group, Ltd.
Maddala, G. S., 1983, Limited-dependent and qualitative variables in econometrics, Cambridge University Press
Maddala, G. S., 1991, A perspective on the use of limited-dependent and qualitative variables models in accounting research, The Accounting Review, 66, 788-807
Mallick, B. K., and A. E. Gelfand, 1994, Generalized linear models with unknown link functions, Biometrika, 81, 237-245
Merton, R. C., 1971, Optimum consumption and portfolio rules in a continuous-time model, Journal of Economic Theory, 3, 373-413
Merton, R. C., 1974, On the pricing of corporate debt: the risk structure of interest rates, Journal of Finance, 29, 449-470
Pykhtin, M., 2003, Unexpected recovery risk, Risk, August, 74-79
Rosch, D., and H. Scheule, 2005, A multifactor approach for systematic default and recovery risk, Risk, September, 62-75
Schuermann, T., 2003, What do we know about loss given default? in Shimko, D. (ed.), Credit risk models and management, 2nd edition, Risk Books
Vasicek, O., 1987, Probability of loss on a loan portfolio, working paper, KMV Corporation
Vasicek, O., 2002, Loan portfolio value, Risk, December, 160-162


Bankruptcy Cnt Sub-population of Moodys recoveries database having trading price of debt at default Bonds and term loans RDD Time-toresolution2 Principal at default3 Bonds RDD Time-toresolution2 Principal at default3 Revolvers RDD Time-toresolution2 Principal at default3 Loans RDD Time-toresolution2 Principal at default3 Total RDD Time-toresolution2 Principal at default3 Bonds and term loans Bonds Time-toresolution2 Principal at default3 Discounted LGD3 Time-toresolution2 Principal at default3 Revolvers Discounted LGD3 Time-toresolution4 Principal at default5 Loans Discounted LGD3 Time-toresolution4 Principal at default5 Total Discounted LGD3 Time-toresolution4 Principal at default5 3500 1338 702 2162 1322 485 250 837 1072 Average 28.32% 1.7263 207,581 25.44% 1.4089 205,028 26.93% 1.4089 205,028 32.57% 1.4089 193,647 28.05% 1.6663 207,099 1.6982 2798 149,623 48.57% 1.7786 157,488 39.47% 1.3944 131,843 40.03% 1.4089 127,586 45.31% 1.6373 146,057 4,585 0.83% 0.0290 5,608 1.47% 0.1062 21,396 1.08% 0.0330 5,521 0.66% 0.0221 4,064 0 4,600,000 -69.78% 0.0027 100.00% 9.3151 345 Std Err of the mean Minimum Maximum Cnt 3.47% -100.00% 0.0433 9,043 0.0027 893.76% 9.0548 59 Average 45.11% 6.65% 416,751 44.22% 47 0.2044 432,061 10.32% 17 0.0027 246,163 26.161% 29 18.12% 291,939 37.33% 76 0.0522 378,593 0.2084 433 204,750 14.50% 0.2084 204,750 18.00% 117 0.1490 124,199 17.20% 205 0.1812 124,671 15.25% 550 0.1958 187,615

Out-of-court Std Err of the mean Minimum Maximum Cnt 19.57% 3.33% 65,675 21.90% 0.0786 72,727 4.61% 0.0000 78,208 18.872% 9.96% 78,628 15.29% 0.0260 54,302 0.0261 16,469 1.37% 0.0292 18,450 2.76% 0.0476 17,836 2.03% 0.0375 14,739 1.13% 0.0026 13,576 -91.87% 0.27% 846.73% 144.38% 1131 Average 29.19% 1.6398 218,493 26.44% 884 1.3194 207,647 25.88% 267 1.3194 207,647 32.21% 514 1.2458 199,192 28.56% 1398 1.5786 216,422 1.4986 3231 0 3,000,000 -27.66% 0.0027 100.00% 3.8767 2507 157,011 43.83% 1.5620 166,781 36.40% 819 1.2165 130,751 37.00% 1.2458 127,199 41.23% 1.4415 151,701

Total Std Err of the mean Minimum Maximum 3.44% -100.00% 0.0425 9,323 0.0000 893.76% 9.0548

163 4,600,000 893.76% 9.0548

6,330 2,250,000 -91.87% 0.0027 846.73% 1.4438

0 4,600,000 893.76% 9.0548

3.75% -100.00% 0.0436 10,590 0.0548

3.74% -100.00% 0.0427 10,325 0.0027

0 4,000,000 893.76% 9.0548

6,330 2,250,000 -0.04% 0.0027 61.18% 0.0027

0 4,000,000 893.76% 9.0548

7.74% -100.00% 0.0798 19,378 0.0548

7.26% -100.00% 0.0776 18,786 0.0027

0 4,000,000 893.76% 9.0548

32,000 1,250,000 -91.87% 0.0027 532.76% 2.8959

0 4,000,000 893.76% 9.0548

5.71% -100.00% 0.0548 11,336 0.0027

5.49% -100.00% 0.0743 16,088 0.0027

0 4,000,000 893.76% 9.0548

24,853 1,750,000 -91.87% 0.0000 846.73% 1.4438

0 4,000,000 893.76% 9.0548

3.17% -100.00% 0.0384 8,194 0.0253 0.0027

3.11% -100.00% 0.0376 8,351 0.0239 4,553 0.78% 0.0275 5,551 1.35% 0.0407 #### 0.99% 0.0309 5,171 0.61% 0.0208 3,972 0.0000

0 4,600,000 0.0027 9.3151

0 2,250,000 0.0027 3.8767

0 4,600,000 0.0027 9.3151

0 4,600,000 -69.78% 0.0027 100.00% 9.3151

Entire population of Moodys recoveries database

0 4,600,000 -69.78% 0.0027 100.00% 9.0548

0 3,000,000 -3.58% 0.0027 100.00% 2.8959

0 4,600,000 -69.78% 0.0027 100.00% 9.0548

0 4,000,000 -69.78% 0.0027 100.00% 9.0548

347 1,250,000 -27.66% 0.0027 100.00% 2.8959 1543

0 4,000,000 -69.78% 0.0027 100.00% 9.0548

0 4,000,000 -69.78% 0.0027 100.00% 9.3151

347 1,750,000 -27.66% 0.0027 100.00% 3.8767 4050

0 4,000,000 -69.78% 0.0027 100.00% 9.3151

0 4,600,000

0 3,000,000

0 4,600,000

1 Return on defaulted debt: annualized simple rate of return on defaulted debt from just after the time of default (first trading date of debt) until the time of ultimate resolution. 2 The time in years from the instrument default date to the time of ultimate resolution. 3 Total instrument outstanding at default.

Table A1 Characteristics of return on defaulted debt (RDD)1 observations by default and instrument type (Moody's Ultimate Recovery Database 1987-2010)



Revolving credit/term loan Avg (%) -96.0 77.5 20.2 24.5 114.8 -100.0 25.6 -100.0 32.4 132.0 -41.7 40.0 106.0 23.2 N/A 19.8 106.1 N/A N/A 22.6 Std Err (%) 4.0 68.3 23.4 28.5 17.0 N/A 30.4 N/A 6.9 73.6 58.3 15.4 70.7 26.8 N/A 7.9 N/A N/A N/A 18.2

Senior secured bonds Avg (%) N/A N/A N/A 23.9 N/A 29.3 161.5 41.9 33.4 63.8 -60.7 65.2 6.9 24.1 -24.7 -27.7 4.9 28.6 143.1 23.9

Subordinated bonds Avg (%) N/A N/A N/A N/A N/A N/A N/A N/A 86.5 N/A N/A N/A N/A 119.6 N/A 23.7 3.5 N/A N/A N/A Std Err (%) N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A 4.9 22.3 N/A N/A N/A

Senior unsecured bonds Avg (%) N/A N/A N/A N/A N/A N/A N/A N/A N/A 57.4 N/A N/A N/A -46.0 N/A 31.0 439.4 N/A N/A N/A

Senior subordinated bonds Avg (%) N/A N/A N/A N/A N/A N/A 72.1 N/A N/A N/A N/A N/A N/A N/A N/A 15.7 -21.8 N/A N/A N/A

Total instrument Avg (%) -96.0 77.5 20.2 24.4 114.8 -35.3 48.4 34.0 32.6 101.8 -53.1 52.9 38.6 24.3 -24.7 23.6 44.3 28.6 143.1 22.6

Collateral Type Guarantees Oil and gas properties Inventory and accounts receivable Accounts receivable Cash Inventory Minor collateral category Most assets Equipment All assets Real estate All non-current assets Capital stock PP&E Second lien Other Unsecured Third lien Intellectual property Intercompany debt Cash, accounts receivables and guarantees Major collateral category Inventory, most assets and equipment All assets and real estate Non-current assets and capital stock PPE and second lien Unsecured & Other Illiquid Collateral Total unsecured Total secured Total collateral

Cnt 2 2 28 5 2 1 6 1 363 4 2 36 8 21 0 32 1 0 0 39

Cnt 0 0 0 2 0 1 1 17 36 2 3 38 17 17 1 3 1 2 1 2

Std Err N/A N/A N/A 40.6 N/A N/A N/A 8.9 23.8 110.9 35.1 19.1 17.4 18.4 N/A 36.6 N/A 43.9 N/A 40.6

Cnt 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 452 7 0 0 0

Cnt 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 158 1 0 0 0

Std Err N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A 10.2 N/A N/A N/A N/A

Cnt 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 117 2 0 0 0

Std Err N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A 11.1 2.5 N/A N/A N/A

Cnt 2 2 28 7 2 2 8 18 400 7 5 74 25 40 1 762 12 2 1 41

Std Err 4.0 68.3 23.4 21.6 17.0 64.7 28.1 11.5 6.6 48.3 27.0 12.3 26.3 16.2 N/A 4.0 39.2 43.9 N/A 17.4

8 367 38 29 33 32 482 514

-5.8 33.5 35.7 46.1 22.4 19.8 33.0 32.2

30.3 6.9 15.0 27.6 8.1 7.9 5.8 5.5


1

19 38 41 35 7 3 139 161

47.5 35.0 56.0 14.3 17.4 27.7 38.0 36.6

10.2 22.9 18.6 12.3 28.5 36.6 9.1 8.4

0 1 0 1 459 452 9 461

N/A 86.5 N/A 119.6 23.4 23.7 25.6 23.7

N/A N/A N/A N/A 4.8 4.9 22.6 4.8

0 1 0 1 159 158 3 161

N/A 57.4 N/A -46.0 33.6 31.0 150.3 33.2

N/A N/A N/A N/A 10.4 10.2 147.6 10.3

1 0 0 0 119 117 3 120

72.1 N/A N/A N/A 15.1 15.7 9.5 15.6

N/A N/A N/A N/A 10.9 11.1 31.4 10.8

28 407 79 66 777 762 636 1398

33.2 33.8 46.2 29.0 24.1 23.6 34.5 28.6

11.7 6.6 12.0 13.9 3.9 4.0 4.9 3.1

Table A2 Return on defaulted debt (RDD)1 by seniority ranks and collateral types (Moody's Ultimate Recovery Database 1987-2010)
1 Annualized return on defaulted debt from the time of default until the time of ultimate resolution.


Table A3 Summary statistics on selected variables and correlations with RDD1 (Moody's Ultimate Recovery Database 1987-2010)

Variable  Count  Minimum  Median  Mean  Maximum  Std err of mean  Correlation with RDD  P-value of correlation

Financial statement and market valuation
Book value total liabilities/book value total assets  1106  38.00%  115.00%  137.42%  392.00%  2.33%  17.20%  4.32E-04
Market-to-book (market value assets/book value assets)  1106  44.00%  123.00%  152.61%  673.00%  2.71%  18.50%  6.34E-05
Intangibles ratio (book value intangibles/book value assets)  773  0.00%  18.34%  21.02%  87.85%  0.75%  11.91%  4.20E-04
Free asset ratio  941  -95.51%  9.24%  5.89%  95.86%  1.18%  -8.97%  2.34E-03
Free cash flow/book value of total assets  1006  -107.64%  -1.41%  -10.77%  34.61%  0.70%  -2.42%  6.11E-04
Cash flow from operations/book value of total assets  1014  (669.12)  (0.48)  57.09  7,778.00  30.65  -3.32%  8.33E-04
Retained earnings/book value of total assets  1031  -757.97%  -25.80%  -61.21%  56.32%  3.01%  -5.91%  1.47E-03
Return on assets  1031  -159.12%  -8.52%  -22.18%  36.35%  0.92%  -6.50%  1.62E-03
Return on equity  1031  -2950.79%  3.10%  23.11%  6492.67%  17.19%  -4.31%  1.07E-03

Equity price performance
One-year expected return on equity  1106  -132.00%  -80.00%  -72.40%  161.00%  1.26%  -6.42%  1.07E-03
One-month equity price volatility  1106  13.00%  209.00%  259.49%  6116.00%  11.18%  2.48%  1.14E-04
Relative size (market cap of firm to the market)  1106  -17.3400  -12.7200  -13.0487  -6.9300  0.0599  8.60%  1.31E-03
Relative stock price (percentile ranking to market)  1106  0.47%  11.00%  13.76%  81.00%  0.42%  -4.36%  1.05E-03
Stock price trading range (ratio of current to 3-yr high/low)  1106  0.00%  0.71%  2.95%  88.00%  0.22%  -2.92%  7.02E-04
Cumulative abnormal returns (90 days to default)  1171  -127.70%  0.00%  -4.87%  147.14%  0.84%  10.30%  2.42E-03

Capital structure
Number of instruments  4050  0.0000  6.0000  9.9511  80.0000  0.1938  -4.04%  5.07E-04
Number of creditor classes  4050  0.0000  2.0000  2.4669  7.0000  0.0188  -2.98%  3.74E-04
Percent secured debt  4050  0.00%  47.79%  47.13%  100.00%  0.56%  8.76%  1.10E-03
Percent bank debt  4050  0.00%  44.53%  45.23%  100.00%  0.54%  9.44%  1.19E-03
Percent subordinated debt  4050  0.00%  41.67%  43.26%  100.00%  0.53%  8.68%  1.09E-03

Credit quality/credit market
Altman Z-score  793  -8.5422  0.3625  -0.3258  4.6276  0.0804  -8.75%  2.49E-03
LGD at default  1433  -8.50%  59.00%  55.05%  99.87%  0.83%  -11.28%  2.38E-04
Moody's original credit rating investment grade dummy  3297  0.0000  0.0000  0.2014  1.0000  0.0070  12.40%  2.37E-04
Moody's original credit rating (minor code)  3342  3.0000  14.0000  12.4054  20.0000  0.0588  3.63%  5.01E-04

Instrument/contractual
Seniority rank  4050  1.0000  1.5000  1.7262  7.0000  0.0142  -9.60%  2.28E-04
Collateral rank  4050  1.0000  6.0000  4.5879  6.0000  0.0254  -10.00%  5.29E-04
Percent debt below  4050  0.00%  9.92%  25.89%  100.00%  0.48%  9.36%  1.18E-03
Percent debt above  4050  0.00%  0.00%  21.41%  100.00%  0.45%  -5.16%  6.48E-04
Tranche safety index  4050  0.00%  50.00%  52.24%  100.00%  0.40%  9.70%  1.04E-03

Macro/cyclical
Moody's all-corporate quarterly default rate  1322  0.00%  7.05%  7.14%  13.26%  0.09%  6.68%  1.47E-03
Moody's speculative quarterly default rate  1322  1.31%  7.05%  7.16%  13.26%  0.09%  6.40%  1.41E-03
Fama-French excess return on market factor  4050  -1076.00%  77.00%  31.06%  1030.00%  7.20%  7.22%  3.02E-04
Fama-French relative return on small stocks factor  4050  -2218.00%  31.00%  20.15%  843.00%  6.00%  2.81%  3.52E-04
Fama-French excess return on value stock factor  4050  -912.00%  54.00%  79.35%  1380.00%  5.75%  -4.27%  5.35E-04
Short-term interest rates (1-month treasury yields)  1322  6.00%  32.00%  31.75%  79.00%  0.46%  -10.22%  2.26E-03
Long-term interest rates (10-year treasury yields)  1106  332.00%  535.00%  538.42%  904.00%  3.61%  -7.00%  1.69E-03
Stock-market volatility (2-year IDX)  1106  4.00%  9.00%  10.03%  19.00%  0.12%  5.70%  1.37E-03

Durations/vintage
Time from origination to default  3521  0.2500  2.9096  4.0286  29.9534  0.0631  0.57%  7.68E-05
Time from last cash-pay date to default  4050  0.0000  0.2384  0.3840  4.3808  0.0075  4.49%  5.63E-04
Time from default to resolution  4050  0.0027  1.1534  1.4415  9.3151  0.0208  -13.41%  1.70E-03
Time from origination to maturity date  3521  0.1000  7.5890  8.9335  50.0329  0.1111  -0.85%  1.14E-04

1 Annualized return on defaulted debt (RDD) from just after the time of default (first trading date of debt) until the time of ultimate resolution.
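A minimal sketch of how summary statistics and univariate correlations of the kind shown in Table A3 can be tabulated, assuming a hypothetical instrument-level DataFrame with an 'rdd' column (the column names and the use of pandas/scipy are assumptions, not the authors' tooling):

```python
import pandas as pd
from scipy import stats

def summarize_covariates(df: pd.DataFrame, covariates: list, target: str = "rdd") -> pd.DataFrame:
    """Summary statistics and Pearson correlations with RDD, in the spirit of
    Table A3. Missing values are dropped pairwise for each covariate."""
    rows = []
    for col in covariates:
        pair = df[[col, target]].dropna()
        corr, pval = stats.pearsonr(pair[col], pair[target])
        rows.append({
            "variable": col,
            "count": len(pair),
            "min": pair[col].min(),
            "median": pair[col].median(),
            "mean": pair[col].mean(),
            "max": pair[col].max(),
            "std_err_mean": pair[col].sem(),
            "corr_with_rdd": corr,
            "p_value": pval,
        })
    return pd.DataFrame(rows)
```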


PART 1

Systemic Risk, an Empirical Approach


Gonzalo de Cadenas-Santiago, Economic Research & Public Policy Department, Banco Santander

Lara de Mesa, Economic Research & Public Policy Department, Banco Santander

Alicia Sanchís, Economic Research & Public Policy Department, Banco Santander1

Abstract
We have developed a quantitative analysis to verify the extent to which the sources of systemic risk identified in the academic and regulatory literature actually contribute to it. This analysis shows that all institutions contribute to systemic risk, albeit to a different degree depending on various risk factors such as size, interconnection, unsubstitutability, balance sheet, and risk quality. From the analysis we conclude that using a single variable or a limited series of variables as a proxy for systemic risk generates considerable errors when identifying and measuring the systemic risk of each institution. When designing systemic risk mitigation measures, all contributing factors should be taken into account. Likewise, classifying institutions as systemic/non-systemic would mean giving similar treatment to institutions that could bear very different degrees of systemic risk, while treating differently institutions that may have very similar systemic risk inside. Consequently, we advocate that a continuous approach to systemic risk, in which all institutions are deemed systemic but to varying degrees, would be preferable. We acknowledge that this analysis may prove somewhat limited given that it is not founded on a predefined conceptual approach, does not fully consider other very relevant qualitative factors,2 and accounts only for some of the relevant sources of systemic risk in the banking system.3 These limits are currently set due to data availability and the current state of empirical research, but we believe that these should not hinder our work in identifying the true sources of systemic risk and our aim to help avoid any partial and thus limited prudential policy approach.

1 We would like to thank our colleagues at the Economic Research Department, the Financial Supervision and Investors Relations Division, and our colleagues at the general intervention division for their helpful comments. Special thanks to Alejandra Kindelán and Antonio Cortina for their invaluable comments and support. The authors would also like to thank the Economic Capital Models and Risk Metrics Department for their invaluable help providing the validation analysis of the model. We thank Jordi Gual and José Peydró for their helpful comments in the discussion of this paper at the XXVII Moneda y Crédito Conference held in Madrid on November 3rd, 2010. We would also like to thank Mauricio Pinto for his collaboration during his stay at the Santander premises, as well as Nyla Karsan for her supervision and help. 2 Such as the quality of supervision, the regulatory framework, and the crisis management framework of each jurisdiction. 3 That is the case of interconnectedness, where only theoretical work has been put in place and scarce variables to approximate it are available.


Motivation and main objectives


Academic and regulatory literature defines systemic risk as the risk of a major dysfunction or instability of the financial system caused by one or more parties, resulting in externalities whose potential effects pose a severe threat to both the financial system and the real sector [Acharya (2009)4, FSB/IMF/BIS (2009)]. International regulatory bodies have identified the need to control such risk as a priority in order to guarantee the financial stability of the system and avoid distortions that could prevent the efficient assignment of savings, investment, consumption, and the correct payment process within the economy [Vesala (2009)]. All those functions are nowadays carried out by the financial system. The negative externalities associated with the potential risk stemming from financial activity justified the introduction of a specific prudential supervisory framework. However, this framework is insufficient to address the effects of systemic dimensions such as capital quality and quantity, liquidity management, risk inherent to trading activities, etc. Basel III is a new framework designed to fill in the gaps and mitigate the weaknesses identified to date and the Financial Stability Board (FSB) and the Basel Committee on Banking Supervision (BCBS) are considering ways of modulating regulatory pressure on systemic risk at both the system and institutional level. This would result in heavier penalties for those deemed to be more systemic. The report by the FSB/IMF/BIS shows the enormous variety of ap5

and increased cost of the banking business with questionable benefits for the stability of the financial system. This article aims to contribute to the debate by building a proxy for systemic risk for each institution and for the system itself and discussing its potential uses and shortcomings. It will be based on criteria gathered from academic and regulatory literature (FSB, BIS, IMF and ECB) that relates to the following factors: size, interconnectedness, unsubstitutability, and other equally important sources of systemic risk such as the strength of institutions (measured in terms of balance sheet quality and composition), the liquidity and financing position, and the quality of risk management. Our goal is to extract the information about systemic risk contained in variables identified by regulators and academics as those most contributing to systemic risk. We will address this by creating a synthesis indicator to use as a proxy for the latent systemic risk nature of each financial institution within the sample. This will enable us to assess the contribution of each risk variable to global systemic risk, evaluate the contribution of each institution to risk, assess the degree of systemicity of each institution through time, and to give a wide measure of systemic risk for the entire system through the cycle. The aim of this analysis is two-fold: on the one hand it is to explain how a simplistic approach to systemic risk may fail to provide a correct classification of banks according to their systemic nature, and on the other to give an insight into the alternative areas prudential policy could take care of when addressing systemic risk (leverage, interconnectedness, etc.). Following this line, we find that though far from perfect, the conclusions that yield from our analysis are robust against any possible methodological flaws since we emphasize the value of the approach from a policy perspective, not methodological. We deem it very important to contribute to a theoretical foundation for the approach to this matter but also believe that it will take years if not decades to consolidate a proven and unified theoretical approach. We are aware that our analysis may fail because it does not rely explicitly on a defined conceptual framework, however we have tried to incorporate all the possible approaches found in the literature by means of including the variables that academics and regulators use to set their conceptually sustained views of systemic risk. Given that our indictor is equivalent to a weighted average of other variables that the literature finds related to systemic risk, we believe that our indicator could be interpreted as a summary measure of these variables, too. Since each of these variables are related to systemic risk within an specific conceptual framework, we


4 Acharya (2009) offers an overview of the regulatory and positive aspects of systemic risk, under which he identifies it with the risk of the failure of the system as a whole arising from the correlation of the elements (assets) on the balance sheets of institutions.
5 Guidance to assess the systemic importance of financial institutions, BIS/IMF/FSB (2009).


believe that summarizing them into one helps to gauge parts of the frameworks used. We have decided not to take a unique and explicit conceptual approach because we think that every framework is conditioned by its specific non-closed and non-observable representation of systemic risk and will thus remain subject to constant change and revision over the years.

more general definition of a systemic event as one generated by one or more financial institutions that has severe consequences for the financial and real sectors. These documents presented a wide range of approaches for the analysis that still coexist today with the different strands of the literature, such as network analysis, risk-based portfolio models, stress testing, and scenario analysis. A vast amount of literature attributes systemic risk to one of the sources mentioned above (size, interconnectedness, unsubstitutability, and balance sheet quality)8 and uses one of the alternative approaches detailed below. Bartram et al. (2009) use three methods to quantify the probability of the systemic failure of the world banking system: correlations between returns on assets, Merton's (1974) structural method, and the inference of PDs from option prices. Their findings state that the probability of a bank's crisis leading to a systemic crisis is small. Tarashev et al. (2009) use the Shapley value to attempt to identify the impact of different determining factors (balance sheet characteristics of an institution) on the probability of the latter causing a crisis of similar characteristics. Brunnermeier (2009) uses market data and proposes a measure (CoVaR) for calculating the incremental risk produced by an institution, measured as the difference between the VaR of the system conditional on an institution being in distress and the VaR of the system in normal conditions. Huang et al. (2009) measure the systemic risk of leading financial institutions by using an approximation of such risk called the distress insurance premium, which is equivalent to the premium that the banking sector would pay to cover itself against a potential loss exceeding a specific share of the liabilities of the system (like an insurance deductible). The premium is calculated with PDs and using the correlation data among balance sheet assets. Acharya et al. (2009) use market data to identify systemic risk with the expected contribution of each institution to a systemic event (systemic expected shortfall, SES), and they measure it as the cost to the banking sector when it becomes undercapitalized, weighted by the contribution made by the institution to such an event. The marginal contribution of each bank is linked to a charge requiring it to internalize the negative externality of its participation in the system. Their paper has a prudential focus. Laeven and Valencia (2008) find common patterns in the crises that they identify as systemic: significant signs of financial stress such as bank runs, bank liquidations, or the closure of wholesale market financing, accompanied by large-scale public sector intervention in the system. However, the number of cases considered systemic is scarce. Even in the latter cases, their frequency, intensity,

Overview of existing research into the identification and measurement of systemic risk
The analysis of risk as a problem caused by an institution capable of damaging the general system has led to a widespread academic and regulatory debate. Macro-prudential identification, attribution, measurement, and management of the systemic nature of institutions either focus on the implications of being too big, too interconnected, or too important to fail; on measurement methodologies (CoVaR, systemic expected shortfall, exposure analysis, etc.); or on the use of market data versus balance sheet data. This has resulted in different ways of identifying risk, its causes, and the policies needed to address it. It bears mentioning that a systemic event is by nature non-observable and as such requires the use of proxies to identify it. As stated in the memorandum of the Cross Border Financial Stability Group6 [CBFSG (2008)], the identification of a systemic event is changeable over time and cannot be based on quantitative criteria alone. There is no straightforward way of identifying a factor as being systemic; it is always conditional on a specific economic, regulatory, or financial environment that, jointly with such a factor, may result in the externality produced leading to a general system failure. Moreover, systemic risks are not immutable; they depend on time and context and as such may appear or disappear in a given institution. This makes it very difficult to identify a systemic crisis and its connection to specific characteristics of the banking system at a given moment. That is the reason why our analysis does not attempt to seek causal relations between regressors and a particular dependent variable seemingly accounting for risk. Although an ad-hoc quantitative criterion linking characteristics to systemic events cannot be established, the expert views in the literature may help to confine what we understand as systemic risk in its different formats (collected in different variables). That such risk has systemic consequences will not only be difficult to measure for the reasons mentioned above, but will always be conditional on the definition we have given to a systemic event and the proxy we have chosen to identify it with. Seminally, the Bank of England's Financial Stability Review (2001) published some criteria for selecting a large and complex institution7 as a candidate for being systemic, although it offered no definition of systemic


risk as such. Jointly, the BIS/IMF/FSB, in their paper Guidance to assess the systemic importance of financial institutions (2009), and the European Central Bank (ECB)'s Financial Stability Review (2006) provided a

6 Group made up of supervisors, central banks, and ministers of finance of the European Union.
7 A large and complex financial institution (LCFI) is systemic if it is within the top ten in one or more of the following categories: equity bookrunners, bond bookrunners, syndicated loan bookrunners, interest rate derivatives outstanding, and holders of custody assets.
8 The Squam Lake Working Group on Financial Regulation (2009) also makes reference to variables such as solvency, balance sheet composition, and liquidity of institutions.


and duration change over time, are heterogeneous, and are difficult to link with the characteristics of the institutions making up the banking system of that period. This paper presents a pragmatic approach to systemic risk through the use of quantitative analysis. Owing to the aforementioned difficulties in isolating factors that explicitly provide information on systemic risk, the methodology includes all the variables identified as significant based on our approach, with the intention of obtaining the most robust, efficient, and least biased results possible. To that end, the variables commonly identified as sources of systemic risk have been used. Owing to the difficulty in specifying variables, sample, and method, the most general approach possible has been undertaken in the selection of variables and the sample of banks used, allowing the information to be processed as objectively as possible. The methodological criterion followed to obtain the indicator is the maximization of the information contained in the sample, subject to that information remaining structurally invariant over time.

likely to be bailed out. Typically, size variables focus on the size of the balance sheet, either as a whole or each of its parts (assets and liabilities). Frequently, size indicators are based on assets, deposits, and/or credit. In this analysis we use variables of all three types, expressed relative to world GDP in order to render the measures comparable with one another. Since we identify size as a source of systemic risk, these indicators are expected to have a positive weighting in the total risk indicator. The quality of the balance sheet, as was made clear by the Squam Lake Working Group (2009), also plays a very important role when defining systemic risk. A weak and unstable balance sheet makes an institution vulnerable to market, credit, and liquidity risks. Barrell et al. (2009) found evidence of an increased probability of a systemic banking crisis whenever solvency, leverage, and liquidity are below the desired levels. In this sense, it seems sensible to direct policy towards guaranteeing enough loss absorption capacity, sufficient liquidity, correct risk administration, and a sound ability to generate capital organically. The literature divides balance sheet information into variable groups that

The main weakness of our approach is the inherent lack of an unbiased and robust benchmark indicator of systemic risk. However, our analysis is based on a theoretical and conceptual approach whose validity is commonly accepted by academics and policymakers as conventional wisdom. The methodology we have chosen particularly addresses the limits of CoVaR, which relies on market information, fails to account for procyclical effects on institutions, and implicitly incorporates the probability of banks being bailed out. Other approaches, such as network analysis, are data intensive, computationally costly, and usually imply the introduction of strong assumptions about the structure of the system.

have gained/lost prominence over the years, in line with the nature of the various emerging distresses during the lifecycle of the recent crisis. The crisis began as a liquidity issue and then turned into a problem of adverse selection when nobody knew the true quality of the assets in the balance sheets. In the latter parts of the crisis, financial distress evolved into a solvency issue as such, for which it was not only necessary to rely on existing capital buffers (static solvency) but also on the ability to generate capital organically9 (dynamic solvency).

Solvency indicators
An increase in these indicators augments balance sheet stability and should result in lowering the risk of the institution. In this synthesis indicator we would expect risk elasticity to solvency levels to be negative, that is, we would expect our indicator to drop whenever our solvency variables increase. The literature identifies the following ratios as especially significant:

Variables accounting for systemically important sources of risk


There are different dimensions of the systemic nature of the risks assumed by an institution that may be analyzed. Table A1, in the Appendix, lists the variables, classified by dimension, that have been used in creating our indicator. They depict the generally accepted sources of systemic risk. Size is the main criterion found in the literature. Many believe that size is a direct channel through which systemic risk passes to the real sector and the rest of the financial system. A crisis within a large financial institution will have a relatively bigger impact on the entire system. This proportionally bigger impact comes from the greater difficulty in replacing the vital functions provided by these institutions and in finding a solution for an orderly wind-down (i.e., it is more difficult to find a buyer), as well as from their greater interconnections with the rest of the financial system. Size is also associated with riskier balance sheet structures, as large institutions face lower funding costs simply because creditors consider them systemic and therefore more

Core Tier 1 represents the amount of pure capital held by an institution and therefore gives information about its maximum reaction capacity in a going concern situation. It is generally made up of equity and retained earnings.

Tier 1 takes into consideration the additional capital with which an institution may deal with its solvency before going into liquidation. This additional capital consists of preferred shares, whose nature, incentives, and consequences make them less able to absorb losses.

9 In this sense, the strength of an institution's results (what we refer to here as performance indicators) gains true significance.


Leverage ratio (and leverage ratio corrected for off-balance sheet assets) both ratios refer to the level of leverage an institution has to deal with.


Off-balance sheet assets the aforementioned variables do not give sufficient consideration to the contribution that off-balance sheet assets make to an institution's solvency and financial circumstances. It is, therefore, necessary to include this variable, since it reflects the greater level of flexibility that these assets provide an institution. Off-balance sheet items include guarantees, committed credit lines, and other contingent liabilities. Other types of off-balance sheet items, normally referred to as the riskier ones, such as structured products, are never available in balance sheet data.

Risk management quality indicators


These work as proxies for an institution's concern for correct risk administration.

Short-term debt to total debt this variable accounts for the risk of refinancing, which could be acute in a general funding crisis, as was experienced during the recent crisis. The (negative) sign of this variable could be conditioned by the fact that during the sample period it becomes more difficult to get access to long term market funding. This suggests that factors limiting the issuance of debt have a greater impact on short term debt that cannot be refinanced than medium and long term debt. Accordingly, the ratio of short term versus total debt also decreases. This effect should be more pronounced for those institutions that have greater refinancing difficulties. Consequently, a lower ratio could be positively correlated with systemic risk in crisis times when market discipline is exaggerated, as opposed to what we would expect in normal times when maturity mismatches could result in the buildup of huge liquidity risks.

Liquidity indicators
The outbreak of the crisis was identified on several occasions with a temporary lack of market liquidity, either due to the interbank failure (in general for the entire banking sector at the beginning of 2009), or because of the absence of a wholesale market in which to liquidate structured assets (i.e., Lehman), or because of bank runs in the retail sector (i.e., Northern Rock). The following are found among the indicators most frequently used in available literature:

Net interest margin a culture that focuses on maximizing earnings is deemed to help mitigate risk. The elasticity of the system to this factor is interpreted as negative. Net interest margin is not net of loan loss provisions (LLP). We acknowledge that this could pose problems for comparability, but believe that the effect of LLP is already considered in the ROE item of the model. On the other hand, detracting LLPs from the margin could bring some degree of distortion to our analysis because LLPs are not homogenously accounted for in every bank.

Deposits to assets in principle, deposits are a relatively stable source of financing in normal times, although such stability may vary in terms of the type of deposit and the depositor, as well as the level of coverage. However, in times of crisis, deposits may become more volatile.

Money market and short-term funding to assets financing in wholesale markets should in principle pose a greater systemic risk, inasmuch as they are a source of interconnections in the financial system. On the other hand, the ability to finance itself in these markets gives a bank greater diversification in its sources of financing and therefore contributes to its financial strength. Since the effect of greater interconnection is taken up by the variable for financing in the interbank market, it cannot be ruled out that the contribution of this variable to systemic risk will be negative, as it picks up the additional (risk reducing) effect mentioned.

The unsubstitutability of certain strategic services may increase the negative externality produced by its disappearance. Unsubstitutability depends on an institutions monopoly position in a certain market and the complexity of the service it offers. A service that is difficult to replace is that of clearing and custody. This paper documents unsubstitutability as the degree to which an institution contributes infrastructure to the financial system. In this case we address the implications related to clearing and settlement infrastructure. We approximate this variable with the proportion of assets safeguarded by custodian banks over the total volume of assets under custody. We understand that this indicator should contribute to the total risk of the system and therefore have a positive weighting. We advise caution with data of this type, as the source from which they are extracted is not the balance sheet but voluntary contributions to the Global Custody Net. This source of data does not take into

Asset mix indicators


As a measure of balance sheet quality, the available literature identifies credit risk and market risk using two indicators:

Total gross lending to assets this indicator reflects the weight of the traditional businesses on a banks balance sheet. We would expect a negative contribution of this variable to systemic risk.

Share of on-balance sheet securities this reflects the weight that market activities can have on the bank's balance sheet. These activities, as well as being more volatile, present a greater degree of interconnection with the rest of the financial system. We would, therefore, expect a positive contribution to systemic risk.

10 We have tested this by estimating the indicator in the absence of the interbank financing variable and the result was as expected.


consideration the nested positions of accounts under the custody of one unit within another, therefore the data contributed by some institutions will contain a certain level of bias. The variables relating to the system of payments also offer information on the degree of unsubstitutability of a specific institution. From the perspective of the ECB, banks, in terms of the unsubstitutability of their payments function in the system, are either considered systemically important (SIRP) or not (PIRP). However, the ECB does not provide any metric that could be used within the scope of this analysis. As stated before some indicators of the degree of concentration of market services may also have a reading for the degree of substitutability that these have. In this vein, although we have grouped some indicators under the size category, the information that they carry also refers to the degree of substitutability though only in a very rough and approximate way since only size and not concentration factors have been taken into account. The current literature also distinguishes between the factors that attribute risk indirectly,

recommend caution in this regard, we also consider that other variables also indirectly contribute additional information on interconnection and therefore filter the possible distortion introduced by the infra-specification of the model. The custodian nature of a bank also provides information about the degree to which it is interconnected with other institutions. This is in addition to the interconnections via the value of the assets of other banks that are held in its custody.

Performance indicators
Performance indicators take into consideration the contribution of an institution to the capacity of the system to organically generate liquidity and resources to remain solvent. Returns and efficiency indicators provide insights into the stream of flows that could be retained in order to increase capital. They are also important signals that the market tracks when banks seek to raise capital from them.

Return on average equity and return on average assets both indicators should reduce the systemicity of a banking institution represented in the market, as they are indicators of strength. We expect the elasticity of risk to these components to be negative.

that is to say, factors that are transmitters of the

Cost-to-income/efficiency it is understood that the greater the efficiency of an institution, the more its capacity to retain profit and generate capital organically. Efficiency weights negatively on total risk. Hence, cost-to-income should contribute positively.

externality produced when the risk becomes systemic. These variables are gathered together as indicators of the interconnectedness among the items making up the financial system. In practice, market indicators are used, such as those shown in the attached Table A1, or alternatively, balance sheet data. We could only approximate this variable by means of the level of interbank exposure of each institution and acknowledge that this approximation may be rather partial. The importance of interconnectedness may be underestimated in our analysis for this reason, but it is as far as we can go using this type of data and approach. Most of the exercises in this vein are inevitably theoretical or simulation-based due to the absence of data and require a large computational workload, thus falling beyond the possible scope of this analysis. We have chosen to introduce an interconnectedness variable based on the degree of exposure of each bank to the interbank market. We understand that this variable gives information on the amplifying effect of its own risk on the rest of the market, and we therefore believe that it should have a positive weighting in our risk indicator. Furthermore, this variable can be read from a liquidity standpoint, since it may be a source of instability given the dependence of the institution on the interbank market rather than on the real sector of the business as such. The recent crisis, during which financing was almost impossible to obtain in the interbank market, is a good example of this risk. Once again, the elasticity of systemic risk to the banking interconnection indicator of an institution should therefore be positive. The shortage of variables (available as balance sheet data) that could provide information on interconnection risk could pose a specification problem for the estimation of the indicator. However, although we

In the current literature we find other variables identified as sources of risk. In considering size, the ECB finds it important to use the number of affiliates, whilst the BIS/IMF/FSB take into consideration the market share of each institution (in loans and deposits). These variables have not been used in this analysis as they are only found at the aggregate level. The ECB finds sources of risk on the liquidity side in contingent liabilities, and additional factors arising from the asset mix related to cross-border and interbank assets. With respect to risk management quality, several approaches in the academic literature find risk mitigation factors in the degree of diversification, and incremental effects in the degree of intermediation with financial derivatives (trade income) and the foreign exchange market (FX income). These variables have not been used in the analysis because they are not available in the databases used11 (Bankscope or any other publicly available database with data for the sample used). Figure A1 in the Appendix also shows a list of market-

11 Nicholo and Kwast (2002) identify the degree of interconnection as a source of systemic risk. IMF GFSR introduces the too interconnected to fail concept. CBFSG also identifies the degree of interconnection as one of the sources of risk. Chan-Lau (2009 a,b) uses network analysis on balance sheet data to evaluate interconnection risk when one or more institutions suffer a credit or financing shock, and draws up an interconnection risk measurement that is equal to the average capital loss suffered by the banking system due to one or more types of shocks to such the system.


driven variables that tend to be used as interconnectedness proxies. These variables have not been used because of the problems mentioned and because of their limited availability.


Database
The analysis below has been undertaken using balance sheet data. The current literature documents the distortion produced in market data when financial markets are distressed. The volatility of returns and CDSs, for example, was strongly altered by the public aid programs implemented across the board during the recent crisis, programs that actually prevented policymakers from gauging the true picture of each bank's real risk state. On the other hand, there is a consensus that in times of instability financial markets tend to become shortsighted and irrational. An example of this is the fact that for a long time the market attributed the same risk (CDS spread) to the bond of a Spanish multinational company (with 83 percent of its market revenues coming from outside the country) as it did to a Spanish Treasury bond. In view of these problems we consider the second alternative, balance sheet data, the most reliable way of approximating sources of risk. We recall that the variables considered are all those available indicators stated in the academic and regulatory literature. In the same spirit of generality, we have chosen a sample of banks with total assets exceeding one hundred billion euros.12 Institutions for which Bankscope and Bloomberg did not have the balance sheet and earnings statement data needed have been excluded from this analysis. As a result, we have a sample of 100 of the largest banks. The estimation was carried out in a panel data fashion for the 2006-2009 sampling period. We have focused on the results of a panel data approach to avoid inconsistencies arising from the size of the sample and the variability of the sources of risk from year to year. With the panel data analysis, we obtain robust estimations of factors over time, not determined by the nature of the emerging risk during a given year.
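As a rough illustration of the sample construction just described, the sketch below filters a hypothetical bank-year data frame to banks with total assets above one hundred billion euros and complete data. The column names and the balanced-panel requirement for 2006-2009 are assumptions made for the sketch, not a description of the authors' actual extraction from Bankscope or Bloomberg.

import pandas as pd

# Illustrative subset of required fields; the paper uses the full variable set of Table A1.
REQUIRED_COLUMNS = ["total_assets_eur", "core_tier1", "leverage", "roaa"]

def build_sample(raw: pd.DataFrame, min_assets_eur: float = 100e9) -> pd.DataFrame:
    """Keep bank-year rows of large banks with no missing required fields and,
    as an assumption for this sketch, retain only banks observed in all four
    years of the 2006-2009 panel."""
    large = raw[raw["total_assets_eur"] >= min_assets_eur]
    complete = large.dropna(subset=REQUIRED_COLUMNS)
    years_per_bank = complete.groupby("bank_id")["year"].nunique()
    keep = years_per_bank[years_per_bank == 4].index
    return complete[complete["bank_id"].isin(keep)]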

A recognized problem of any approach dealing with bank data is that it fails to gauge other financial, but not banking-based, sources of risk, such as those stemming from the insurance sector and other financial intermediaries. We would like to extend the scope of this analysis in the near future. Certain variables that are important for defining the indicator (ROA, interest margin, etc.) depend on local conditions, which makes comparison among banks located in different geographical locations difficult (i.e., interest rates or margins resulting from the tightness of the market in which each institution is located), but this does not prevent such factors from being a significant risk factor for each institution, and they should therefore be considered as part of their vulnerabilities (or strengths). Furthermore, we acknowledge that our dataset could also be partially influenced by the public aid programs put in place, which have an impact on the balance sheets of a number of banks. However, we believe that these data are the best one can obtain and are significantly less distorted than market data. We, therefore, choose to use them. An additional issue arising from the use of these data is that balance sheet data are unable to highlight latent imbalances within the banking system that ultimately result in crises. This issue is wholly uncontested, since even the market data used in the ECB indicator for systemic risk did not anticipate the looming crisis. Nevertheless, that distortion is considered in our conclusions.

Methodology
Our approach adheres to the position taken by the Bank of Spain (2010) and strives to come close to the approaches found at the ECB (2006) and the IMF (2010). These define a series of balance sheet variables to establish a systemic score for individual banks. To this end, and with the aim of allowing the systemic relevance of each variable to the total aggregate indicator to be assigned objectively, we have employed a principal component analysis (PCA) approach. We have chosen this methodology over a CoVaR or network-based approach, even though we consider them very useful for theoretical and monitoring purposes, since they fail to provide insight into the effect of time on systemic risk sources, which we attempt to address. Besides, we aim to anticipate the affordable alternatives that regulators will consider in their assessment of the systemic nature of banks. The latter will very likely be a compound measure of the various systemic risk sources identified. Performing conditional PCAs was an appealing alternative, but it would have implied identifying, a priori, some closed form of systemic risk. This could introduce some bias into our analysis. As stated before, we are trying to implement the general and overarching view found in the literature. This method allows us to reduce the information gathered in our sample of indicators to a much lower number of synthetic indicators that provide segments of independent information. Obtaining such indicators requires estimating the factor loadings on which the original variables will rely so that they form the final indicator. Once the information has been reduced and segmented into such indicators, we resort to expert opinion to decide which of them could be interpreted as an indicator of systemic risk. By construction, once it has been decided that a specific component contributes specific information, such as that on systemic risk, the possibility that such information could be found in another of the discarded indicators is excluded. Once the indicator is obtained, its loadings are used to weight the characteristics gathered from each bank and its systemic score is calculated. The estimation process is detailed in the methodological appendix.
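To make the estimation route more concrete, the following minimal sketch shows how a PCA of standardized risk variables can be run on a bank-year panel and how the candidate factors might be inspected. The variable names, the scikit-learn implementation, and the five-factor cut-off are illustrative assumptions rather than the authors' exact procedure, which is described in their methodological appendix.

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def extract_candidate_indicators(panel: pd.DataFrame, n_factors: int = 5):
    """Standardize the risk variables and extract the first principal components.

    panel: rows = bank-year observations, columns = risk variables
           (size, solvency, liquidity, asset mix, RMQ, performance, ...).
    Returns the loadings, the factor scores, and the share of variance explained.
    """
    z = StandardScaler().fit_transform(panel.values)   # zero mean, unit variance
    pca = PCA(n_components=n_factors)
    scores = pca.fit_transform(z)                      # bank-year scores on each factor
    loadings = pd.DataFrame(pca.components_.T,
                            index=panel.columns,
                            columns=[f"factor_{i + 1}" for i in range(n_factors)])
    explained = pca.explained_variance_ratio_          # information summarized per factor
    return loadings, scores, explained

# The systemic risk candidate is then chosen by expert judgment among the
# extracted factors, on the basis of explained variance and of the signs of its
# loadings (e.g., positive on size and leverage, negative on solvency and
# performance), mirroring the selection step described in the text.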

A cautious approach
We recognize that certain information related to risk mitigating factors has not been considered in the selection of variables. This is due to availability reasons as well as the diversity of the data. The corporate structure of the different institutions also makes it impossible to compare many diversification data across banks. Other risk mitigation sources are the degree of diversification and the type of legal structure of each bank, elements that have proved to be fundamental pillars of the institutions that have best survived the crisis. This means that our analysis overestimates the systemic risk of highly diversified and well organized banks. Variables of size have been taken relative to world GDP. We understand that this decision may limit to a certain degree the ability of the indicator to perceive the systemic risk of an institution at the local level, but it is justified for reasons of comparability and consistency in creating a global indicator. This also means that our analysis underestimates the systemic risk of local banks.

12 Many of these banks (24) make up the list of banks requiring cross-border supervision, according to the FSB.



Variables13 / factors                      1SR    Weight of 1SR      2        3        4        5

Size
  Assets                                  0.32        8.0%        -0.26    -0.27     0.27    -0.02
  Credit                                  0.21        5.3%        -0.41    -0.16     0.35    -0.12
  Deposits                                0.24        6.0%        -0.42    -0.29    -0.04    -0.04
Solvency
  Core Tier 1                            -0.15        3.9%         0.11    -0.30    -0.12    -0.23
  Tier 1                                 -0.05        1.3%         0.24    -0.41    -0.20    -0.17
  Off-balance sheet                      -0.13        3.3%         0.03    -0.09     0.30     0.23
  Leverage                                0.34        8.5%         0.10     0.17    -0.03     0.37
  Real LR                                 0.13        3.3%         0.13     0.04     0.12     0.19
Liquidity
  Deposits/assets                         0.02        0.5%        -0.36    -0.12    -0.52     0.07
  ST funding                             -0.19        4.8%         0.25    -0.10     0.45     0.01
Asset mix
  Gross loans                            -0.20        5.0%        -0.30     0.34     0.17    -0.12
  Securities                              0.28        7.1%         0.16    -0.36     0.03     0.15
Risk management quality (RMQ)
  Short-term debt                        -0.24        6.1%         0.03    -0.32    -0.12    -0.03
  NIIM                                   -0.34        8.6%        -0.03    -0.17     0.14     0.03
Unsubstitutability & interconnectedness
  Custody                                 0.08        2.1%         0.13    -0.27     0.25    -0.02
  Interbank exposure                      0.22        5.5%         0.17    -0.02    -0.06     0.46
Performance
  ROAA                                   -0.36        8.9%        -0.14    -0.19    -0.02     0.31
  ROAE                                   -0.24        6.1%        -0.17    -0.12    -0.08     0.53
  Cost-to-income                          0.23        7.7%         0.29    -0.02    -0.16    -0.19

Estimation quality
  Eigenvalue                              4.26                     2.43     2.3      1.4      1.3
  % of variance                          22.4%                    12.5%    12.2%     7.6%     6.9%
  Determinant                             4.26                    10.4     24       34.5     46

Table 1 Sources of risk: extracted principal components and candidates for systemic risk indicator (SR)

Results
Characterizing the sources of risk
By means of the PCA, we extract the first p factors, which explain about two-thirds of the information contained in the dataset. These factors are weighted sums of the systemic risk variables. From these factors, we select our systemic risk indicator candidate on the basis of the proportion of information from the original data that it summarizes and on the basis of its consistency with the preconceived idea of the characteristics that such an indicator should have, both internally and externally. We focus the analysis on the results extracted under a panel data approach estimated through (part of) the cycle spanning the crisis years 2006 to 2009. Examining the loadings of the aforementioned indicators (Table 1), we again identify a distinctive pattern in the first indicator. We observe a unique attribute in that this indicator is a linear combination (a weighted sum) of variables whose value increases when: the ability to organically generate resources deteriorates, the activity devoted to retail business is reduced, leverage is increased, size is increased, the volume of financial assets (securities) on the balance sheet is increased, access to market funding falls, the ability to retain resources deteriorates due to efficiency costs, interbank market funding dependency is high, exposure to the interbank market increases, the volume of assets under custody increases, and solvency is reduced. Consequently, all the characteristics included in the current literature on the typification of a synthetic systemic risk indicator are met. Also, the indicator preserves the desirable characteristics mentioned above concerning information explained and stability. We will call this indicator the synthesis systemic risk indicator (SSRI).
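As an illustration of how the SSRI translates into a bank-level score, the sketch below applies a fixed set of loadings, an illustrative excerpt in the spirit of the first column of Table 1, to one bank's standardized characteristics. The bank data and the subset of variables shown are hypothetical.

import pandas as pd

# Illustrative excerpt of first-component loadings (see Table 1 for the full set).
ssri_loadings = pd.Series({
    "assets_to_gdp":  0.32,   # size increases the score
    "leverage":       0.34,   # higher leverage increases the score
    "core_tier1":    -0.15,   # solvency buffers reduce the score
    "gross_loans":   -0.20,   # a retail-oriented asset mix reduces the score
    "securities":     0.28,   # market activities increase the score
    "roaa":          -0.36,   # organic capital generation reduces the score
})

def ssri_score(bank_z_scores: pd.Series) -> float:
    """Weighted sum of standardized bank characteristics using the SSRI loadings."""
    common = ssri_loadings.index.intersection(bank_z_scores.index)
    return float((ssri_loadings[common] * bank_z_scores[common]).sum())

# Hypothetical standardized characteristics (z-scores within the sample) of one bank.
bank = pd.Series({"assets_to_gdp": 1.8, "leverage": 0.9, "core_tier1": -0.5,
                  "gross_loans": -1.1, "securities": 0.7, "roaa": -0.4})
print(round(ssri_score(bank), 2))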


Direct and total impact of risk sources on the SSRI


On examination of the indicator loadings, it is possible to assess the direct effects of the various sources of risk on the SSRI. We use the weighted elasticity of the SSRI to each of the risk variables to derive these direct effects.14 By doing so, one finds that the risk arising from size weighs approximately 19 percent in the indicator, while solvency and liquidity account for 26 percent. The capacity to organically generate capital in a recurrent manner contributes 21 percent, while sound risk management policy contributes 14 percent. Asset mix weighs 12 percent, and the risk arising from interconnectedness and unsubstitutability of services would seemingly add 8 percent to the weighting. It is important to bear in mind, however, that risk sources interact with each other through

13 Size variables are expressed relative to world GDP. Leverage Ratios are expressed in times. Liquidity and asset mix ratios are expressed as share of Assets. Short-term debt is expressed relative to total debt. Custody assets are rendered to total custody assets in the system. RMQ stands for Risk Management Quality. 14 We take the effect by normalizing the elasticity of the SSRI on each of the identified sources of risk.


innumerable channels. This is easily observable by having a look at the correlations among the risk variables selected for this analysis (Table A2 in the Appendix) where we find that size, solvency, balance sheet quality, and performance are significantly correlated. This means that risk factors operate not only in a direct manner but also throughout other indirect channels that reduce or amplify their direct effect according to the correlation shared with other risk variables.15 We use these correlations to assess the direct and indirect impact of each of the risk variables and find that the total impact of each variable group, taking into consideration such cross-effects, would yield: 31 percent for size, 18 percent for solvency and liquidity, 17 percent for risk management quality, 15 percent to the capacity to organically generate capital in a recurrent manner, 12 percent for balance sheet composition, and 7 percent for interconnectedness and unsubstitutability as a whole. From a policy point of view, the latter brings us to consider the cross effects among risk factors, as these may amplify or mitigate the originally envisaged major sources of risk. A policy rule that considers risk factors should strive to be as diversified as possible to remain neutral on the innumerable cross effects among variables. Consequently, taking a wide set of variables in our measure would be advisable. Likewise, we consider that from the prudential viewpoint, the systemic risk mitigation measures imposed on an institution should depend on the factors that cause such risk. Thus, certain solutions may be presented: 1. Through an increase of the levels of solvency (Core tier 1, leverage ratio, etc.) and liquidity, an option already addressed by Basel III, 2. Treatment of the rest of the sources of systemic risk that have also proved to be significant. As stated before, any policy rule should bear in mind not only the benefits but also the costs involved in such measures. These measures are not substitute but complementary measures. Enhancing balance sheet composition, risk management quality, and the ability to generate capital organically in a recurrent manner are measures as effective as others and do not involve direct costs that could restrict credit in quantity and price. On the other hand, additional capital charges could have a nil marginal effect on the level of total risk and their cost could exceed the additional stability they provide.
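The aggregation behind the direct and total impact figures is only outlined in footnotes 14 and 15. The sketch below shows one plausible reading, in which direct shares are normalized loadings and total shares additionally route each variable through its correlations with the others. It is a hedged illustration, not the authors' exact computation, and the loadings and correlation matrix used are hypothetical.

import pandas as pd

def direct_shares(loadings: pd.Series) -> pd.Series:
    """Normalized absolute loadings: the direct weight of each risk variable."""
    w = loadings.abs()
    return w / w.sum()

def total_shares(loadings: pd.Series, corr: pd.DataFrame) -> pd.Series:
    """Direct plus indirect effects: each variable also acts through the
    variables it is correlated with (corr must share the loadings' index)."""
    corr = corr.loc[loadings.index, loadings.index]
    total = (corr * loadings).sum(axis=1).abs()   # sum over j of rho_ij * w_j
    return total / total.sum()

# Hypothetical loadings and correlations for three variable groups.
loads = pd.Series({"size": 0.32, "leverage": 0.34, "roaa": -0.36})
rho = pd.DataFrame([[1.0, 0.4, -0.3],
                    [0.4, 1.0, -0.2],
                    [-0.3, -0.2, 1.0]],
                   index=loads.index, columns=loads.index)
print(direct_shares(loads).round(2))
print(total_shares(loads, rho).round(2))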
                   One step    Two steps    Average
Upper quartile        -1%          16%          6%
Median               -28%          14%        -13%
Lower quartile       -72%         -87%        -40%

Table 2 Average forward scoring error, one and two steps ahead

and/or two periods ahead if the loadings estimated in year t were used to score each institution according to the characteristics found in year t+k.16 We observe that the risk scores are systematically undershot. However, the error made if we use the average of the loadings would be much smaller, and that is the reason for choosing to use a central risk measurement. A panel estimate offers that central risk measure, consistent with the scores obtained year by year, which preserves the ability to correctly delimit the systemic institutions. Table 3 shows the weighted impact of each risk indicator selected every year between 2006 and 2009. As can be seen, the interpretation of the loadings remains consistent through time, varying only in the intensity of the factors, but not in their sign. In the same way, we can also see that the indicator estimated over the years collects similar proportions of information, as reflected by the trace criterion of the indicators. We see that although the significance of certain factors has changed over time, this change has been consistent with the changing nature of the crisis. For example, we observe that the weight of exposure to the interbank market in the indicator was twice as high in 2007/2008 as in 2009, when this market began to reactivate itself. And we also see that certain factors have lost importance as a result of the absence of problems arising from them, such as size, custody, and the existence of off-balance sheet assets. The changes in the composition of the indicator have only involved intensity (the weight of the factors) and not sign, showing the stability of the estimation. However, the variability of such composition should be borne in mind when designing the indicator for macro-prudential purposes. The risk factors collected in 2007 (adverse selection versus balance sheet composition) were different from the 2008 risk factors (solvency crisis) or those of 2009 (liquidity crisis), and will therefore have a different weighting in the risk score obtained each year.
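A minimal sketch of the forward scoring error defined in footnote 16 follows: a bank's year-t characteristics are scored with the loadings estimated in year t and with loadings estimated k years earlier, and the relative difference is reported. The relative normalization and the names used are assumptions for illustration.

import pandas as pd

def forward_error(features_t: pd.Series,
                  loads_t: pd.Series,
                  loads_t_minus_k: pd.Series) -> float:
    """Current-year score minus the score obtained with k-year-old loadings,
    expressed relative to the current-year score (one possible normalization)."""
    current = (loads_t * features_t).sum()
    lagged = (loads_t_minus_k * features_t).sum()
    return float((current - lagged) / abs(current))

# A panel-based (average) set of loadings reduces this error, which is why the
# paper favors a central, through-the-cycle measure over year-by-year loadings.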

A measure of system-wide systemic risk


Having estimated the indicator by means of panel data, we can consistently apply the loadings of the SSRI indicator to the variables of the banks

Systemic risk through the cycle 2006 -2009


Owing to the fact that the objective of the indicator is a prudential framework, we should bear in mind that creating an indicator based on these factors in a specific year in order to apply systemic measures in future periods is not forward-looking and could result in penalizing institutions for characteristics they no longer have, or, vice versa, reducing their systemic charge even though the risk information deduced from the ratios in a specific year had worsened. Table 2 shows the average band errors committed one

15 Asset volume, for instance, represents a risk in itself and through other size and balance sheet factors that add to the weighting. However, since the same variable is negatively correlated to performance and solvency variables, these detract from the final effect, too. 16 The error is the difference between the score obtained in the current year, obtained with loads estimated on the basis of balance sheet information for that year, less the same score obtained with the characteristics of the current year but with loads estimated on the basis of information existing one or two years previously.


Variables / factors                        2009     2008     2007     2006    Average   Consistent
Size
  Assets                                   0.32     0.29     0.31     0.36     0.31        yes
  Credit                                   0.21     0.19     0.20     0.26     0.21        yes
  Deposits                                 0.24     0.21     0.21     0.29     0.23        yes
Solvency
  Core Tier 1                             -0.15    -0.25    -0.17    -0.02    -0.14        yes
  Tier 1                                  -0.05    -0.03    -0.16    -0.08    -0.03        yes
  Off-balance sheet                       -0.13    -0.12    -0.15    -0.19    -0.11        yes
  Leverage                                 0.34     0.33     0.35     0.33     0.32        yes
  Real LR                                  0.13     0.23     0.00     0.13     0.12        yes
Liquidity
  Deposits/assets                          0.02    -0.03    -0.01     0.08     0.01        yes
  ST funding                              -0.19    -0.14    -0.20    -0.18    -0.19        yes
Asset mix
  Gross loans                             -0.20    -0.19    -0.20    -0.21    -0.20        yes
  Securities                               0.28     0.30     0.23     0.27     0.28        yes
RMQ
  Short-term debt                         -0.24    -0.25    -0.23    -0.16    -0.24        yes
  NIIM                                    -0.34    -0.31    -0.35    -0.34    -0.33        yes
Unsubstitutability and interconnectedness
  Custody                                  0.08     0.07     0.08     0.08     0.10        yes
  Interbank exposure                       0.22     0.20     0.25     0.24     0.20        yes
Performance
  ROAA                                    -0.36    -0.33    -0.38    -0.37    -0.35        yes
  ROAE                                    -0.24    -0.31    -0.22    -0.15    -0.24        yes
  Cost-to-income                           0.23     0.20     0.25     0.17     0.23        yes

Cumulative variance (%)
  Factor 1                                 22.4     26.4     24.8     21.9     23.9
  Factors 1-2                              35.2     39.7     37.6     37.1     37.4
  Factors 1-3                              47.4     51.0     50.2     48.8     49.4
  Factors 1-4                              55.0     58.9     58.5     57.2     57.4
  Factors 1-5                              63.0     66.5     66.0     64.9     65.1
Cumulative determinant
  Factor 1                                  4.6      5.0      4.7      4.2      4.4
  Factors 1-2                              10.4     12.7     11.5     12.0     11.8
  Factors 1-3                              24.0     27.2     27.4     26.8     27.5
  Factors 1-4                              34.5     40.9     43.5     42.5     44.0
  Factors 1-5                              45.5     58.8     62.2     62.6     63.8

Table 3 Risk weightings evolution over 2006-2009 for the systemic risk indicator (SR)

each year. This does not just make it possible to establish intra-group bank comparisons each year, but also to check an institution's degree of systemicity over time. Additionally, given that institutions are comparable with themselves and with the rest of the institutions over time, we are able to obtain a measure of systemicity of the entire system over the period analyzed. In Figure 1, a measure of system-wide systemic risk has been plotted. This measure is obtained by plotting the median risk score found in the system along the sample period. One may distinguish how (according to our measure) banking systemic distress was rather contained in 2006 and how systemic risk increased to full-blown levels during 2008. It was only in 2009, after massive international policy action (monetary and fiscal stimulus, bailouts, new regulatory regimes, accounting changes), that the

risk was leveled down somewhat, although the looming threat to the system remained far higher than in normal times (2006). This indicator offers the same reading as the systemic risk indicator plotted by the ECB in its Financial Stability Review since 2009.
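The system-wide measure plotted in Figure 1 is simply the median bank-level score per year; a minimal sketch, with hypothetical scores, follows.

import pandas as pd

def system_wide_ssri(scores: pd.DataFrame) -> pd.Series:
    """scores: rows = banks, columns = years (e.g., 2006-2009).
    Returns the median systemic charge across banks for each year."""
    return scores.median(axis=0)

# Hypothetical bank-level SSRI scores for three banks.
scores = pd.DataFrame({2006: [0.2, 0.4, 0.3],
                       2008: [0.8, 0.9, 0.7],
                       2009: [0.6, 0.5, 0.7]},
                      index=["bank_a", "bank_b", "bank_c"])
print(system_wide_ssri(scores))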

Distribution and classification of banks according to systemic scoring


When we compare how the 100 banking institutions are distributed according to the chosen indicators for systemic risk (figures that are available from the authors on request), we find that the distribution of the institutions is very asymmetrical in all the variables used. The asymmetry in the separate distributions of each variable may lead to a biased selection of banks which are considered most likely to be systemic. This is even more likely when fewer criteria are used to delimit the systemic risk


Using a single variable, or a limited series of variables, as a proxy for systemic risk fails to provide a full explanation and results in numerous errors when attempting to identify and measure the systemic risk of each institution.

Based on this analysis, we advocate the following from the policy point of view:

When designing systemic risk mitigation measures, all contributing factors need to be taken into account in the policy cost function. A prudential policy based on a single variable and on a single dimension of a variable would give rise to an incorrect classification both of systemic institutions (type 1 error) and of institutions not considered systemic (type 2 error), introducing a high expected cost in the event of crisis. Likewise, classifying institutions as systemic/non-systemic would mean giving similar treatment to institutions that might actually have very different degrees of systemic risk, whilst treating very differently institutions that may present similar levels of systemic risk.

A continuous measure of risk is preferable to a simple systemic/non-systemic classification; under this approach all institutions would be systemic, but to varying degrees.

Figure 1 System-wide level of systemic risk, 2006-2009 (system-wide SSRI = median of systemic charge)

perimeter. When we examine what would happen if we used the criterion of size to classify the 100 institutions as systemic when they breach the median threshold, we find that 16 percent of banks are classified as systemic when they are not (type I error), whilst at the same time one would be ignoring 14 percent of the sample institutions that (according to the SSRI) are systemic (type II error). The expected stability cost of such a misclassification justifies the selection of a synthetic indicator that uses a richer set of risk variables to gauge all possible systemic risk sources embedded in each bank. An indicator combining all the variables, such as the SSRI, renders the distribution of banks the closest to normality (according to the Jarque-Bera test) vis-a-vis the systemic variables taken independently. This allows a scaling of the resulting banking systemic scores closer to a continuum, thus making the setting of thresholds for qualifying as systemic less discrete and less partial. These results have several policy implications. A prudential policy based on a single variable would give rise to a partial and possibly wrong classification of both systemic and non-systemic banks (type 1 and 2 errors), introducing a high expected cost in the event of crisis. Similarly, focusing on a single dimension of one variable would bias the choice towards a single source of risk, when the criterion followed indicates that the sources of such risk are multiple (i.e., in the case of size: assets, liabilities, credit risk, market risk, etc.).
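The type I/type II comparison and the normality check can be reproduced mechanically once bank-level scores are available. The sketch below assumes hypothetical size and SSRI series, uses a median threshold for both measures, as in the exercise described above, and applies a Jarque-Bera test as the normality criterion.

import pandas as pd
from scipy import stats

def classification_errors(size: pd.Series, ssri: pd.Series) -> dict:
    """Flag banks as systemic when above the median of each measure and compare
    the size-based call against the SSRI-based one."""
    size_flag = size > size.median()
    ssri_flag = ssri > ssri.median()
    type_1 = float((size_flag & ~ssri_flag).mean())   # flagged by size, not by SSRI
    type_2 = float((~size_flag & ssri_flag).mean())   # missed by size, systemic per SSRI
    return {"type_1": type_1, "type_2": type_2}

def jarque_bera_pvalues(indicators: pd.DataFrame) -> pd.Series:
    """Jarque-Bera p-value per candidate indicator; higher values sit closer to normality."""
    return indicators.apply(lambda col: stats.jarque_bera(col).pvalue)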

Systemic risk may be addressed through different measures: reinforcing solvency and liquidity standards as already envisaged by Basel III (increasing capital levels and adjusting capital charges to riskier activities like those held in the trading book); addressing the other relevant sources of systemic risk linked to each institutions risk profile, that is improving risk management, corporate governance, and adopting a business model capable of organically generating recurrent capital; and addressing an institutions complexity, market services unsubstitutability, and interconnectedness within the system.

We consider that a purely quantitative approach has serious limitations as it does not take into account some relevant unquantifiable factors that could mitigate or exacerbate systemic risk. This analysis attempts to be as broad and objective as possible and also considers the qualitative dimension by using other quantitative indicators as a proxy for it.17 However, due to the unavailability and diversity of relevant data,18 the analysis fails to consider risk mitigation sources such as the quality of supervision, the regulatory framework, and the crisis management framework of each jurisdiction, or the degree of diversification and the legal structure of a bank. All of which are elements that have been proven to be fundamental pillars of the financial systems that have best survived the crisis. That said, we believe that this reinforces the conservative approach of our analysis, and hence its conclusions for those systems where these variables are of a high quality.

Conclusion
The main conclusions of this analysis are:

All financial institutions contribute to systemic risk, albeit to a different degree. Their contribution is driven by various risk factors embedded in multiple groups of variables such as size, interconnectedness, unsubstitutability, balance sheet quality, and the quality of risk management.
17 That would be the case for example of risk management quality variables used here. 18 The corporate structure of the different institutions also makes it impossible to include diversification as a specific variable for the analysis.


A macro-prudential tool should not focus solely on quantitative indicators but consider qualitative factors as well. These factors prove crucial in defining the risk profile of an entity. The complexity and volatility of bank operations, transparency, the cross-border legal structure, the degree of exposure to financial markets, and the existence of living wills are all good examples of the aforementioned qualitative dimension that could help to mitigate or exacerbate the degree of systemic risk in the system. Particularly relevant for assessing systemic risk is the information provided by recovery and resolution plans, as they provide information under a crisis assumption that complements the business-as-usual information provided by accounting statements. Although we recognize the relevance of having sound capital and liquidity standards, we also deem it very important to have good internal risk management and corporate governance practices, all of the above framed within a scheme of close micro-prudential supervision. In order to avoid excessive complexity at the bank level and to facilitate the substitutability of vital services, we find the development of recovery and resolution plans (living wills) crucial. The latter, jointly with a strengthening of international coordination for crisis management, will reduce the risk of contagion. Equally important is that macro-supervision is reinforced to take into full consideration the risks derived from interconnectedness. Finally, a prudential solution should consider not only the benefits but also the costs involved in such measures. The above-mentioned factors are as effective as other measures, but crucially they do not involve direct costs that could restrict the viability of institutions or the availability of credit, in quantity and price. Having said that, additional capital charges could have a nil marginal effect on the level of total risk and their cost could exceed the additional stability they provide.


References

Acharya,V., 2009, A theory of systemic risk and design of prudential bank regulation, CEPR Discussion Papers, 7164 Acharya, V., H. L. Pedersen, T. Philippon, and M. Richardson, 2009, Measuring systemic risk, Federal Reserve Bank of Cleveland Working Paper, 1002 Aikman, D., P. Alessandri, B. Eklund, P. Gai, S. Kapadia, E. Martin, N. Mora, and G. Sterne, 2009, Funding liquidity risks in a quantitative model of systemic stability,Bank of England working papers, 372 Bank of England, 2001, Financial Stability Review, 11 Barrell, R., E. P. Davis, T. Fic, D. Holland, S. Kirby, and I. Liadze, 2009, Optimal regulation of bank capital and liquidity: how to calibrate new international standards, Financial Services Authority, Occasional Papers, 38 Basel Committee on Banking Supervision, 2010, Report and recommendations of the crossborder bank resolution group, March BdE - Associate directorate general banking supervision, 2010, The treatment of systemic institutions Beville, M., 2009, Financial pollution: systemic risk and market stability, Florida State University Law Review, 36, 245-247 Blanchard, O. J., C. Cottarelli, and J. Vials 2009, Exiting from crisis intervention policies, Fiscal Affairs, Monetary and Capital Markets, and Research Departments, Policy Papers, IMF Brunnermeier,A., 2009, CoVaR, Federal Reserve Bank of New York - Staff Reports, 348 Chan-Lau, J., 2009a, Regulatory capital charges for too connected to fail institutions: a practical proposal, IMF Working Papers, 10/98


Size: Assets to world GDP (AtGDP); Deposits to world GDP (DtGDP); Share of credit to world credit (%CtGDP); Degree of concentration of various markets (CONC); Number of transactions (NoT); Assets under custody/total assets in custody (AuC); Number of subsidiaries (NS). The associated information covers total assets as a percentage of domestic GDP, market shares in banking deposits, lending, credit, and payment transactions, the volume of transactions engaged, and the volume of assets warehoused or managed.

Balance structure
Balance quality: Leverage ratio (equity/total assets); Leverage ratio (including off balance sheet); Tier 1 (T1); Core Tier 1 (CT1); Off balance sheet assets/total assets (OBAtA)
Liquidity: Maturity mismatch; Short term liquidity indicators; Long term liquidity indicators; Deposits/assets (DtA); Money market and short term funding/assets (MM&STFtA); Contingent liabilities (CL)
Asset mix: Gross loans/assets (LtA); Securities/assets (StA); Mortgages; Cross-border assets (CBA); Interbank assets (IA); Other assets (OA)
Risk management culture: Short-term debt/total debt (STDtTD); Non-interest income to total income (NIIR); Net interest margin (%); Trade income (TI)
Performance: Return on average assets (ROAA) (%); Return on average equity (ROAE) (%); Cost to income ratio (%)
These indicators capture, among other things, leverage positions, risk-based capital, the funding decisions of the bank, the vulnerability to rate volatility during liquidity dry-up periods, income diversification, the relative size of retail versus investment-type business, efficiency, and the exposure to the real, financial, and foreign cycles.

Interconnectivity: Eonia overnight lending contribution (EOLC); Market capitalization (MC); Subordinated debt issuance (SDI); Interbank exposure/Tier 1 capital; Intra-group exposures; Ranking in markets in which the institution is a significant player; Share in transaction volumes in markets in which the institution participates; Credit spreads, bond spreads, and price to book value (level and correlation)

Substitutability: Worldwide custody (and clearing and compensation) (AuC); Sectoral breakdown of deposits and lending; Volume of interbank activity; Volume of correspondent banking; Hirschman-Herfindahl index

Key: ECB - the reference was found in the ECB's Financial Stability Report (2009); BoE - the reference was found in the BoE's Financial Stability Review (2001); PAPER - the reference was found in one of the papers used so far (e.g., SUERF, Squam Lake); EU MoU CBFS - EU Memorandum of Understanding on Cross-Border Financial Stability

Table A1 Candidate variables for systemic risk, all held in the literature1

1 Variables not selected were excluded for data availability reasons

Table A2 Correlation matrix of systemic risk variables

The matrix reports pairwise correlations and their one-tailed significance levels for the selected variables, grouped under Size, Balance quality, Asset quality, Risk management culture, and Performance, and covering: assets to world GDP (%), share of credit to world credit (%), deposits to GDP (%), assets under custody/total assets in custody (%), Tier 1 (%), Core Tier 1 (%), off balance sheet assets/total assets, the leverage ratio (including and excluding off balance sheet items), gross loans/assets (%), securities/assets (%), short-term debt/total debt (%), net interest margin (%), interbank exposure/Tier 1, return on average assets (ROAA) (%), return on average equity (ROAE) (%), and the cost to income ratio (%). Blue marked fields indicate a significant correlation between variables; the remaining figures give the correlation degrees and significance levels.
PART 1

Price of Risk – Recent Evidence from Large Financials1

Manmohan Singh, Senior Economist, Monetary and Capital Markets Department, IMF
Karim Youssef, Economist, Strategy, Policy, and Review Department, IMF

Abstract
Probability of default (PD) measures have been widely used in estimating potential losses of, and contagion among, large financial institutions. In a period of financial stress, however, the existing methods to compute PDs and generate loss estimates have produced results that vary significantly. This paper discusses three issues that should be taken into account in using PD-based methodologies for loss or contagion analyses: (i) the use of risk-neutral probabilities versus real-world probabilities; (ii) the divergence between movements in credit and equity markets during periods of financial stress; and (iii) the assumption of stochastic versus fixed recovery for the assets of financial institutions. All three elements have non-trivial implications for providing an accurate estimate of default probabilities and associated losses as inputs for setting policies related to large banks in distress.

1 The authors wish to thank Darrell Duffie, Kenneth Singleton, Inci Ötker-Robe, Andre Santos, Laura Kodres, Mohamed Norat, and Jiri Podpiera for their helpful comments. The views expressed are those of the authors and do not reflect those of the IMF. This paper has been authorized for distribution by Martin Mühleisen.

Measures for the probability of default (PD) of financial institutions have been widely used in estimating potential losses of, and contagion among, large financial institutions.2 However, different methodologies used to arrive at such estimates have not necessarily produced uniform results. During the recent financial crisis, two types of PDs (based on CDS spreads and Moody's KMV, respectively) have differed markedly for large banks, and the resulting loss estimates have also varied significantly. In order to properly identify policies with respect to large banks in distress, a closer review of the key differences arising from the various methods to extract PDs is necessary. Indeed, the difficulties in harmonizing the results of the methodologies discussed need to be spelled out, as they could potentially have an impact on authorities' reactions and subsequent policy advice.

These differences start with the underlying market signals used to calculate the PDs. Credit default swap (CDS) spreads, providing signals from debt and/or credit markets given an assumed level of recovery, have been used to arrive at a PD measure. By design, this measure is risk neutral because it does not take into account investors' varying degrees of risk aversion. Risk neutrality allows us to bypass the need to calibrate a real-world measure of investors' utility by assuming that all investors are risk neutral; that is to say, risk-neutral methods assign greater probabilities to worse outcomes. PDs derived via the risk-neutrality assumption are widely accepted when pricing credit instruments, or when assessing the impact of default risk on a portfolio of assets with similarly priced components. The Moody's KMV methodology, which accounts for investors' risk aversion by extracting signals from equity markets to arrive at a real-world measure of risk, has also been used to extract PDs. In contrast to risk-neutral PDs, which use only market prices as inputs, risk measures based on the real-world approach also use balance sheet inputs. It is generally accepted that real-world measures provide a better approximation of investors' risk aversion and are, as such, better suited to carrying out scenario analysis to calculate potential future losses caused by defaults [Hull (2009)]. Nevertheless, the nature of the inputs used for real-world measures also creates the potential for missing important market signals (especially during distress). The resulting implication is that losses computed from risk-neutral PDs may need to be adjusted downward to arrive at real-world probabilities, while during periods of market stress the assumptions underlying some of the models yielding real-world PDs may become tenuous. The difficulties associated with the transformation of risk-neutral PDs to real-world PDs are discussed below, along with issues that need to be considered and explored further. In particular, in adjusting the risk-neutral probabilities with a conversion factor (the price of risk), we explore the importance of: (i) the deviation between credit and equity prices during periods of financial market stress and (ii) the role of the assumption of stochastic versus fixed recovery for financial institution assets.

Adjusting probabilities: the price of risk

The price of risk can be defined as the ratio needed to convert risk-neutral probabilities (associated with CDS spreads) to real-world probabilities. The recent literature on this topic converges on the methodology of Amato (2005), which proxies the conversion factor as follows: price of risk = CDS spread/equity market signal. An example of an equity market signal would be taking the Moody's KMV expected default frequency (EDF) as a real-world measure. An enhancement to this would be to proxy the conversion factor by also accounting for the recovery (R) expected at default (40 percent is a common assumption for R), that is to say, adjusted price of risk = CDS spread/[EDF x (1 - R)]. The BIS Quarterly Review (March 2009) uses this approximation to show that the price of risk during the 2007-08 period had fluctuated from an average of about 4 to 12. In other words, risk-neutral probabilities derived from CDS spreads would need to be adjusted by a large and significant factor to determine real-world probabilities. For example, if CDS spreads were implying a PD of 0.9 percent, and the associated price-of-risk conversion factor for a given corporate entity was 10, then the relevant adjusted PD would be 0.09 percent. The price of risk for large global banks has indeed been sizable and varies across institutions. Our results suggest that at the time of the Lehman bankruptcy in September 2008, the price of risk for many large banks was about 5, and for European banks, in particular, higher than 10 in some cases (Figure 1). There have been efforts to use Moody's model to adjust real-world probabilities into risk-neutral measures. This is mainly done via the use of the Sharpe ratio and a correlation coefficient between individual returns and market returns. However, it should be highlighted that this framework assumes that investors treat financial and non-financial firms in a similar fashion (even during the recent crisis). Additionally, in this framework, the Sharpe ratio is updated only once a year, which is inconsistent with most asset allocation models, especially during the distress periods of 2008. The price-of-risk approach, which avoids these complications, may better reflect the transformation from the risk-neutral to the real-world probability of default.
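To make the conversion concrete, the following Python sketch implements the price-of-risk adjustment described above under the simple credit-triangle approximation (spread roughly equal to PD times one minus recovery); the spread and EDF inputs are illustrative only and are chosen to reproduce the 0.9 percent/factor-of-10 example in the text.

```python
# A minimal sketch, not the authors' code: price-of-risk adjustment of a
# CDS-implied, risk-neutral PD using a real-world equity signal (EDF).

def risk_neutral_pd(cds_spread: float, recovery: float = 0.40) -> float:
    """Approximate one-year risk-neutral PD: spread ~= PD * (1 - R)."""
    return cds_spread / (1.0 - recovery)


def price_of_risk(cds_spread: float, edf: float, recovery: float = 0.40) -> float:
    """Adjusted price of risk = CDS spread / [EDF * (1 - R)], as in Amato (2005)."""
    return cds_spread / (edf * (1.0 - recovery))


def real_world_pd(cds_spread: float, edf: float, recovery: float = 0.40) -> float:
    """Scale the risk-neutral PD down by the price-of-risk factor.
    By construction this recovers the EDF, which is the real-world signal."""
    return risk_neutral_pd(cds_spread, recovery) / price_of_risk(cds_spread, edf, recovery)


if __name__ == "__main__":
    spread, edf = 0.0054, 0.0009  # 54 bps CDS spread, 9 bps EDF: illustrative values
    print(f"risk-neutral PD: {risk_neutral_pd(spread):.2%}")        # 0.90%
    print(f"price of risk  : {price_of_risk(spread, edf):.1f}")     # 10.0
    print(f"real-world PD  : {real_world_pd(spread, edf):.2%}")     # 0.09%
```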

Divergence between credit and equity markets during the recent crisis
In transforming risk-neutral probabilities to real-world probabilities, the implicit assumption is the co-movement of the equity and bond market signals that drive EDFs and CDS spreads, respectively. However, in the case of large banks, the equity markets have been far more volatile than the bond markets since 2008.

2 Probability of default or distress is used here in a broader context, to include conditional probabilities of default, joint probability of default, distance to distress, and joint default dependence (i.e., via the off-diagonal elements of the distress dependence matrix).


Figure 1 Price of risk. The panels plot the price of risk from Jul-08 to Apr-09 for American banks (JPMorgan, Morgan Stanley, Citigroup, Wells Fargo, Goldman Sachs, Bank of America) and European banks (RBS, Svenska, Barclays, Santander, SocGen, BBVA, BNP Paribas, Deutsche Bank, Danske, UBS, Raiffeisen, Commerzbank, Credit Suisse, Intesa, Standard Chartered, KBC, Erste).


Figure 2 Volatility divergence in equity and credit markets for LCFIs. The panels plot model SD versus actual SD for Citibank U.S. and Deutsche Bank, Aug-08 to Jun-09. Sources: Bloomberg and Moody's KMV

In most cases, CDS spreads for the large banks have remained subdued given the perception of too-large-to-fail. Compared with non-financial firms, such as GM or Chrysler, where bondholders have recently taken a haircut and losses, bondholders of large complex financial institutions have so far been kept whole. As a result, the variations in prospective returns have been reflected more immediately in the equity markets, relative to the volatility of their bond prices (see Figure 2, which is derived from Bloomberg's OVCR function).3

The asymmetric signals from debt and equity markets, in turn, have implications for estimating losses or guarantees from the implied balance sheet components. Disentangling the implications of this differential in volatility needs to be considered in probability models. Moreover, the asymmetry in signals from credit and equity markets is important to consider in models using distance-to-distress, where debt and equity market volatilities are important variables in determining the final results (see Appendix II). Consequently, from a policy perspective, the estimates of losses and guarantees need to be interpreted with caution when the models do not account for the dynamics of these relationships.

Assumption between fixed and variable recovery

Models estimating PDs have commonly assumed fixed recovery values. However, stochastic recovery value assumptions may be necessary during distress episodes. Unlike sovereigns or corporates, financial institutions have few tangible assets, and recovery during the credit crisis was very different from the 40 percent assumption (Lehman and Landsbanki were roughly 8 cents and 1 cent on the dollar respectively, while Fannie Mae and Freddie Mac were both above 90 cents on the dollar). Hence, the use of a time-varying or stochastic recovery rate is all the more important in the case of distressed financial institutions. In cases where cash bonds trade below par, the cheapest-to-deliver (CTD) bond is a good proxy for stochastic recovery [Duffie (1999); Singh (2003, 2004); Singh and Spackman (2009)], as it reflects a more realistic inference of the value of debt obligations than the fixed recovery assumption. Moreover, the use of the CTD is also in line with the physical settlement covenants of the ISDA contracts.4 In the case of Iceland's Landsbanki Bank, for example, the probabilities stemming from using a fixed recovery rate versus a stochastic recovery proxied by the cheapest-to-deliver bond are markedly different (Figure 3).5

The use of PDs with a fixed recovery assumption also has implications for assessing institutional interconnectedness through the joint probability of distress (JPoD). Inaccurate estimates of conditional probabilities may result if the independent probabilities are biased. For example, if we are to estimate the probability of default for Goldman conditional on Citi's default, incorrect estimation of the probability for Citi would contaminate the conditional probabilities for Goldman.6
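To illustrate how much the recovery assumption moves the PD extracted from a given CDS spread, the Python sketch below compares a fixed 40 percent recovery with a "stochastic" recovery proxied by the price of the cheapest-to-deliver bond, using the simple credit-triangle approximation. This is our illustration, not the paper's code, and the spread and bond price are hypothetical.

```python
# A minimal sketch, assuming the credit-triangle relation spread ~= PD * (1 - R):
# PD under a fixed 40% recovery versus a recovery proxied by the CTD bond price.

def pd_from_spread(cds_spread: float, recovery: float) -> float:
    """One-year PD implied by a CDS spread for a given recovery rate."""
    if not 0.0 <= recovery < 1.0:
        raise ValueError("recovery must lie in [0, 1)")
    return min(cds_spread / (1.0 - recovery), 1.0)


def pd_fixed_vs_ctd(cds_spread: float, ctd_price: float, fixed_recovery: float = 0.40):
    """Compare the fixed-recovery PD with the CTD-implied PD.
    ctd_price is the cheapest-to-deliver bond price as a fraction of par,
    taken as a proxy for what bondholders would actually recover."""
    return pd_from_spread(cds_spread, fixed_recovery), pd_from_spread(cds_spread, ctd_price)


if __name__ == "__main__":
    # Hypothetical distressed name: 2,000 bps spread, CTD bond at 10 cents on the dollar.
    pd_fixed, pd_ctd = pd_fixed_vs_ctd(cds_spread=0.20, ctd_price=0.10)
    print(f"PD with R = 40%       : {pd_fixed:.1%}")
    print(f"PD with R = CTD price : {pd_ctd:.1%}")
```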

3 Bloomberg's OVCR function (equity volatility and credit risk) converts equity prices, leverage, and implied volatility to a CDS spread. This theoretical equity-implied CDS spread can be compared to the actual CDS spread. The OVCR function is described in Appendix I.
4-6 In most models, including those using CDS and Moody's EDF data, the general assumption has been to hold the recovery value constant (in the range of 20-40). The probability of default (i.e., the hazard rate) and the recovery value more or less offset each other when bonds trade near par; such an approximation works poorly when bonds trade at high spreads. To further augment the case for stochastic recovery, the cheapest priced Citi and Goldman bonds illustrate that their bond prices have traded well below par in the recent crisis, despite the implicit forbearance offered to bondholders of large financial institutions, unlike the bondholders of GM, Chrysler, or even Fannie Mae and Freddie Mac (Figure 4). See IMF Working Paper No. 08/258 (page 14, second paragraph), which states that using CDS data after Lehman's default will require the use of a variable recovery value assumption, or in its absence, CTD bonds. There may be other factors, such as funding costs during a crisis, that can contribute to probability estimates.

Figure 3 Landsbanki, Iceland: PDs from stochastic and fixed recovery assumptions. The figure plots the CTD bond price (left scale, fraction of par) and the probability of default under R = stochastic and R = 40% (right scale) from Jan-08 to Jul-09, with the default event marked; a distressed bond price cut-off of below 95% of par is used. Sources: Bloomberg, Fund staff estimates, and Singh and Spackman (2009)

Figure 4 U.S. banks: Citi and Goldman's bond prices (fraction of par, Jan-08 to Jul-09). Source: Bloomberg

Conclusion and policy implications

This paper has argued that during periods of stress, measures for the probability of default of financial institutions should be adjusted to reflect the price of risk and to address the potential divergence of credit/equity market relationships and the stochastic nature of asset recovery. Not taking these elements into consideration may result in different, and perhaps misguided, results and policy recommendations.

From a policy angle, until we have a more precise idea of the magnitude of the biases and how best to revise the existing models, loss estimates based on distance-to-distress models should be interpreted with caution for large banks. Also, modeling the degree of interconnectedness of large banks based on joint probabilities of distress should incorporate the low recovery rates observed in the context of the recent credit events that involved large financials, so as to avoid over- or under-estimation of the degree of connectedness.

References

Amato, J., 2005, Risk aversion and risk premia in the CDS market, Bank for International Settlements, Quarterly Review, December, 55-68
Amato, J., 2009, Overview: investors ponder depth and duration of global downturn, Bank for International Settlements, Quarterly Review, March, 1-28
Bloomberg, Inferring default probabilities from capital structure information, version 1.0, equity valuation and credit risk function (OVCR)
Duffie, D., 1999, Credit swap valuation, Financial Analysts Journal, 55:1, 73-87
Hull, J., 2009, The credit crunch of 2007: what went wrong? Why? What lessons can be learned? Journal of Credit Risk, 5:2, 3-18
Moody's, 2007, EDF 8.0 model enhancements
Segoviano, M. A. and M. Singh, 2008, Counterparty risk in the over-the-counter derivatives market, IMF Working Paper No. 08/258 (Washington: International Monetary Fund)
Singh, M., 2003, Are credit default swap spreads high in emerging markets? An alternative methodology for proxying recovery value, IMF Working Paper No. 03/242 (Washington: International Monetary Fund)
Singh, M., 2004, A new road to recovery, RISK, September
Singh, M. and C. Spackman, 2009, The use (and abuse) of CDS spreads during distress, IMF Working Paper No. 09/62 (Washington: International Monetary Fund)

Appendix I – Bloomberg's equity volatility and credit risk (OVCR) function7

Merton (1974) assumes that the value of a firm, A, follows a geometric Brownian motion. Under a constant interest rate r, there exists a risk-neutral measure Q under which we can write the dynamics of the firm value as

dAt/At = r dt + σA dW  (1)

Merton further assumes that the firm's debt is in the form of zero-coupon debt with face value D and a single fixed expiry date T. At time T, the firm pays off the debt if its firm value is higher than the face value of the debt and declares bankruptcy if its firm value is below the face value of the debt, A < D. The equity holder claims the remainder of the firm value. Under these assumptions, the equity of the firm can be regarded as a call option on the firm value with the strike equal to the face value of the debt and the maturity being the maturity of the debt. Since the firm value follows a geometric Brownian motion, we can value the equity of the firm using the option pricing formula developed by Black and Scholes (1973) and Merton (1973). The time-0 value of the firm's equity, E0, can be written as

E0 = A0 N(d1) - D e^(-rT) N(d2)  (2)

d1 = [ln(A0/D) + rT + ½σA²T] / (σA√T),  d2 = [ln(A0/D) + rT - ½σA²T] / (σA√T)  (3)

In particular, N(d2) represents the risk-neutral probability that the call option will finish in the money and hence that the firm will not default. Therefore, 1 - N(d2) denotes the risk-neutral probability of default. In principle, we can use this pricing relation to estimate the default probability, if we are willing to accept the simplifying assumptions of Merton (1974). In reality, there are several more assumptions and choices that we need to make. The equity value can be observed from the stock market if we regard the stock price as the equity value per share and ignore the added value of warrants. The stock return volatility, σE, can also be estimated either from time series data or from stock options. Furthermore, the face value of the debt can be obtained from balance sheet information if we are willing to make further simplifying assumptions regarding the debt structure and its maturities. Nevertheless, the firm value A0 and the firm volatility σA are normally regarded as unobservable. These two quantities can be solved from the following two equations,

E0 = A0 N(d1) - D e^(-rT) N(d2),  σE = N(d1) σA A0/E0  (4)

where the second equation is a result of Itô's lemma. These two equations contain the two unknowns (A0, σA) if we can obtain estimates of (D, E0, σE, r, T). Therefore, one can solve for the two unknown quantities using standard numerical nonlinear least squares procedures.

Appendix II – The Moody's KMV model8

The concept of using an option theoretic framework combined with stock prices to estimate default risk was controversial when first developed. Consider a holding company that owns stock in another company, where the market value of these holdings is V. Further, the company has a debt payment of D due at a fixed point in time, T. Owning the equity of such a holding company is equivalent to owning an option to buy the stock at a price of D with an expiration date of T. Owning the debt is equivalent to owning a risk-free bond that pays D at time T and being short a put option on the stock with an exercise price of D and an expiration date of T. In this example, the firm defaults if the value of assets, V, is below D at the expiration date T. One can use the simple Black-Scholes option formula to determine the value of equity. The four inputs to this equation are the debt payment D, which we refer to as the default point, the market value of the firm's assets, V, the volatility of assets, σA, and the risk-free interest rate, r. The probability that the obligations will not be met is a function of the firm's DD, which represents the number of standard deviations that the firm is away from the default point. DD can also be viewed as a volatility-adjusted, market-based measure of leverage. As the VK model is a barrier model, it relates the asset value, the default point, and volatility to the default probability via a first-passage-through-time formula. Vasicek has noted that the probability of default for a first-passage-through-time model is approximately equal to 2N(-DD), where DD is the so-called distance-to-default and N is the cumulative normal distribution. Distance-to-default can be defined as

DD(V, XT, σA, T, μ, a) = [log(V/(XT + aT)) + (μ - ½σA²)T] / (σA√T)

where V is the value of a firm's assets, XT is the default point to the horizon, μ is the drift term, σA is the volatility of assets, T is the horizon, and a represents cash leakages per unit time due to interest payments, coupons, and dividends. The value of the firm's assets and their volatility is computed as described above. The default point is computed as current liabilities plus a portion of long-term debt. For longer horizons, a larger portion of long-term debt is included in the default point to reflect that long-term debt becomes more important at longer horizons. Note that the DD varies considerably with the horizon under consideration. At longer horizons, the weight on volatility increases relative to the default point. Empirically, there is a strong relationship between DD and observed default rates: firms with a larger DD are less likely to default. Nevertheless, the actual default rate found in the data differs from the literal predictions of the model. Taken literally, the Brownian motion assumption on asset value implies a Gaussian relationship between DD and the EDF credit measure. Specifically, for a DD greater than 4, a Gaussian relationship predicts that defaults will occur 6 out of 100,000 times. This would lead to one half of actual firms being essentially risk-free. This implication is not found in the data. Consequently, when implementing the model, we depart from the Gaussian assumption by implementing an empirical mapping.

7 See Bloomberg's Inferring Default Probabilities from Capital Structure Information, version 1.0.
8 See Moody's EDF 8.0 Model Enhancements.
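The following Python sketch shows one way to put equations (2)-(4) of Appendix I to work: back out the unobserved asset value A0 and asset volatility σA from an observed equity value and equity volatility, and then form a simplified distance-to-default and Gaussian PD. It is our illustration under the Merton assumptions above (not Bloomberg's or Moody's implementation): it uses a root finder rather than nonlinear least squares, omits the cash-leakage term a, and reports the simple N(-DD) rather than the first-passage 2N(-DD). All inputs are hypothetical.

```python
# A minimal sketch, assuming the Merton setup of Appendix I (hypothetical inputs).
from math import exp, log, sqrt

from scipy.optimize import fsolve
from scipy.stats import norm


def merton_equations(x, E0, sigma_E, D, r, T):
    """Residuals of eq. (2) (equity value) and eq. (4) (equity volatility)."""
    A0, sigma_A = x
    d1 = (log(A0 / D) + (r + 0.5 * sigma_A ** 2) * T) / (sigma_A * sqrt(T))
    d2 = d1 - sigma_A * sqrt(T)
    eq_value = A0 * norm.cdf(d1) - D * exp(-r * T) * norm.cdf(d2) - E0
    eq_vol = norm.cdf(d1) * sigma_A * A0 / E0 - sigma_E
    return [eq_value, eq_vol]


def solve_assets(E0, sigma_E, D, r, T):
    """Solve the two nonlinear equations for (A0, sigma_A)."""
    return fsolve(merton_equations, x0=[E0 + D, 0.5 * sigma_E],
                  args=(E0, sigma_E, D, r, T))


def distance_to_default(A0, sigma_A, D, mu, T):
    """Simplified DD (no cash-leakage term): standard deviations to the default point."""
    return (log(A0 / D) + (mu - 0.5 * sigma_A ** 2) * T) / (sigma_A * sqrt(T))


if __name__ == "__main__":
    E0, sigma_E, D, r, T = 40.0, 0.60, 100.0, 0.03, 1.0  # hypothetical bank, in billions
    A0, sigma_A = solve_assets(E0, sigma_E, D, r, T)
    dd = distance_to_default(A0, sigma_A, D, mu=r, T=T)
    print(f"A0 = {A0:.1f}, sigma_A = {sigma_A:.3f}, DD = {dd:.2f}, "
          f"Gaussian PD = {norm.cdf(-dd):.4%}")
```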

Part 2
Simulation and Performance Evaluation of Liability Driven Investment (LDI)
Behavioral Finance and Technical Analysis
The Failure of Financial Econometrics: Assessing the Cointegration Revolution
A General Structural Approach For Credit Modeling Under Stochastic Volatility
A Stochastic Programming Model to Minimize Volume Liquidity Risk in Commodity Trading
The Organization of Lending and the Use of Credit Scoring Techniques in Italian Banks
Measuring the Economic Gains of Mergers and Acquisitions: Is it Time for a Change?
Mobile Payments Go Viral: M-PESA in Kenya

PART 2

Simulation and Performance Evaluation of Liability Driven Investment (LDI)


Katharina Schwaiger, Financial Engineer, Markit
Gautam Mitra, Distinguished Professor and Director of CARISMA, Brunel University, and Chairman, OptiRisk Systems

Abstract
In contrast to decision models which use in-sample scenarios, we use out-of-sample scenarios to conduct simulation and decision evaluation, including backtesting. We propose two simulation methodologies and describe six decision evaluation techniques that can be applied to test the performance of liability-driven investment (LDI) models. A good practice in evaluating the performance of funds is to apply risk-adjusted performance measures; we have chosen two widely applied ratios: the Sortino ratio and the funding ratio. We perform statistical tests and establish the quality of the portfolios. We report our empirical findings by testing an asset and liability management model for pension funds, for which we propose one deterministic linear programming model and three stochastic programming models.

The decisions suggested by the generic models proposed in Schwaiger et al. (2010) are examined and evaluated in detail through simulation studies. Two simulation methodologies and six evaluation techniques are introduced, and statistical methods and performance measures are used to interpret these solutions. We start with a brief literature review of simulation and evaluation methods applied to asset and liability management (ALM) so far. The simulation and decision evaluation framework proposed is generic and can also be applied to areas other than ALM. We review briefly simulation methodologies used in portfolio planning and ALM applied to pension funds, banks, and insurance companies. A bank ALM model is introduced by Kusy and Ziemba (1986), where the aim is to find the optimal trade off between risk, return, and liquidity of the bank. A multiperiod stochastic linear programming model is introduced and then compared with the Bradley-Crane stochastic decision tree (SDT) model. The simulation study determines which model of the two gives better first-period results by testing two hypotheses: whether or not the ALM initial period profit is superior to the SDT and whether the mean profit of the ALM is superior to the SDT. Oguzsoy and Güven (1997) tested their bank ALM model in different ways. Firstly, they relaxed their management policy constraints and compared the results with those of the fully included management policy constraints to show their importance in avoiding risk. This also allows the determination of the most important policy constraints. Secondly, they conducted sensitivity analysis on the rate of loans becoming irregular, different inflation effects, and on the rate of short term loans returning before maturity. And finally they analyzed the occurrence probabilities of possible outstanding deposit values. Zenios et al. (1998) developed a multi-stage stochastic programing model with recourse for fixed-income portfolio management under uncertainty. They show that their stochastic programing models perform well using both market data (historical backtesting) and simulated data (using Monte Carlo simulation procedures) compared to portfolio immunization models and single period models. They conclude that these results show robustness with respect to changes in the input data. Mulvey et al. (2000) use stress testing in their pension plan and insurance ALM model for Towers Perrin-Tillinghast with generated devastating cases, such as equity markets dropping and interest rates falling simultaneously. Once the desired scenario characteristic is set, the CAP:Link system (scenario generator) filters the desired scenarios. Their stochastic planning approach was compared to the current equity/bond mix of the company, an equity hedged mix, and a long-bond strategy. Kouwenberg (2001) developed a multi-stage stochastic programing ALM model for a Dutch pension fund. The model is tested using rolling horizon simulations and compared to a fixed mix model. The objective of the SP model is to minimize the average contribution rate and to penalize any deficits in the funding ratio. The rolling horizon simulations are conducted for a period of five years, where the model is re-solved using the optimal decisions after each year. Then the average contribution rate and the

information about underfunding is saved after each year and compared to the solution of the fixed mix model. The multistage simulation method in our present paper is comparable to this method and is explained in more detail below. Their findings are that the SP model dominates the fixed mix model and that the trade off between risk and costs is better in the SP model than in the fixed mix model. Fleten et al. (2002) compare the performance of a multistage stochastic programming insurance ALM model and a static fixed mix insurance ALM model through in-sample and out-of-sample testing. The stochastic programing model dominates the fixed mix model, but the degree of domination decreases in the out-of-sample testing. Only the first stage solutions are used to test the quality of the models. The models are solved, the first stage decisions are fixed, and at the next time period the models are solved again and the new first stage decisions are used. Our multistage simulation methodology also draws upon the algorithm described in their paper. More papers on decision models and simulations for asset liability management can be found in Mitra and Schwaiger (2011).

Simulation
We want to see how decision models perform and compare them in the setting of a rolling forward simulation. An individual model can be tested to exhibit its performance, but also a set of models can be compared with each other for benchmarking. We then compare the models at each time period by looking at the wealth (and/or funding status) distribution. There are two decision making frameworks we can consider: (a) fixing the first stage decisions and then only looking at the recourse actions, which means in our models we decide on the set of bonds and the initial injected cash today and then only look at the reinvested spare cash and the money borrowed from the bank in the future; or (b) rolling the model forward at fixed time intervals of one year, which means in our models we decide on the set of bonds and the initial injected cash today and then in the future we might not only want to reinvest spare cash or borrow cash, but we might also want to sell some bonds and/or buy newly available bonds to match the liabilities even better. In later time periods some bonds might even have defaulted and we will not receive any cash flow payments from them anymore. The two decision making frameworks are called two-stage simulation and multistage simulation from now on. The in-sample scenarios over which the decision models are initialized will be called optimization scenarios, while the out-of-sample scenarios over which the decision models will be evaluated will be called simulation scenarios [introduced by Fleten et al. (2002)]. Fleten et al. (2002) argue that a potential error source is if the scenarios used in the simulation differ from the original scenarios of the optimization problem. The stochastic programing models adapt to the information in the scenario tree during in-sample testing. If the out-of-sample scenarios differ too much from the in-sample scenarios, then this gained information might lead to overly good or overly bad decisions. In our computational study


the out-of-sample scenarios are generated using random sampling from the original scenarios of the optimization problem. The distribution of the out-of-sample scenarios will be the same as the distribution of the in-sample scenarios. The procedures for conducting a two-stage simulation and a multistage simulation are explained in the following two subsections. The main difference between the two is that in the two-stage simulation framework the decision model using the optimization scenarios is solved only once and the decisions gained from it are used within the simulation scenarios, while during the multistage simulation the decision models are rerun at subsequent times.
Figure 1 Two-stage simulation tree

Methodology (1): two-stage simulation


The aim of the two-stage simulation is to solve the models at the initial time period and see how they perform in the future under different (new) scenarios. The steps of the two-stage simulation [Methodology (1)] are as follows: (1) generate the optimization scenarios; (2) solve the models and fix the first stage decisions; (3) generate the simulation scenarios; (4) use the fixed first stage decisions and run the model with the simulation scenarios; and (5) compute the wealth (surplus and deficit) and mismatch information at each time period. Figure 1 shows graphically the two-stage simulation, where the green nodes and connecting arcs are the optimization scenarios and the red nodes and arcs are the simulation scenarios. The models are solved at the initial time point and their performance is measured under the new simulation scenarios.
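A minimal Python sketch of Methodology (1) follows; it is our pseudocode-style illustration, not the authors' implementation, and every function body is a placeholder (the scenario generator, the LDI optimization, and the recourse evaluation are all stand-ins).

```python
# A minimal sketch of the two-stage simulation: solve once over the optimization
# scenarios, fix the first-stage decisions, and evaluate them over newly
# sampled simulation scenarios. All model logic below is a placeholder.
import random


def generate_scenarios(n: int, horizon: int, seed: int = 0):
    """Placeholder interest-rate scenario generator (one rate per period)."""
    rng = random.Random(seed)
    return [[0.04 + 0.01 * rng.gauss(0, 1) for _ in range(horizon)] for _ in range(n)]


def solve_first_stage(optimization_scenarios):
    """Placeholder LDI optimization: the bond set and initial cash decided today."""
    return {"bonds": ["bond_1", "bond_2"], "initial_cash": 100.0}


def evaluate(first_stage, scenario):
    """Placeholder recourse evaluation: wealth (surplus/deficit) per period."""
    return [first_stage["initial_cash"] * (1 + r) for r in scenario]


if __name__ == "__main__":
    horizon = 10
    opt_scen = generate_scenarios(n=150, horizon=horizon, seed=1)   # optimization scenarios
    sim_scen = generate_scenarios(n=1000, horizon=horizon, seed=2)  # simulation scenarios
    first_stage = solve_first_stage(opt_scen)                       # fixed hereafter
    wealth_paths = [evaluate(first_stage, s) for s in sim_scen]
    # wealth_paths holds the surplus/deficit distribution at each time period
```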

Methodology (2): multistage simulation


The second methodology is used to solve the models at the initial time period but also at possible later time periods under different scenarios. The steps of the multistage simulation [Methodology (2)] are as follows [see also Fleten et al. (2002) and Kouwenberg (2001)]: (1) generate the optimization scenarios around the market expectations (i.e., the interest rate scenarios (or mortality rates or salary curves) for periods 1 to T) and solve the stochastic models to obtain the initial decisions at time t=1; (2) generate a high number of sampled simulation scenarios around the optimization scenarios for periods 1 to T; (3) for each simulation path node, generate a conditional scenario tree; (4) if t<T, solve the models at each simulation node; and (5) compute the current wealth (surplus or deficit) and the new set of bonds, then go back to step 4. In Figure 2, again the green lines represent the optimization scenarios and the red lines represent the simulation scenarios. The generated optimization scenarios after time period one are conditional on the simulation node: the scenario generator takes all information up to the current simulation node into consideration. The multistage simulation is applied to our model in a telescoping manner. The liabilities time period is fixed and reduced after each year. The first portfolio optimization has a time horizon from 1 to T; the year after, the time horizon is 1 to T-1.
Figure 2 Multistage simulation tree
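As a companion to the sketch above, the following Python fragment illustrates the rolling-horizon loop of Methodology (2); again this is our illustration with placeholder model calls, not the authors' code.

```python
# A minimal sketch of the multistage (rolling-horizon) simulation: at every
# simulation node a conditional scenario tree is generated and the decision
# model is re-solved; the wealth (surplus/deficit) is recorded along the way.
import random


def conditional_tree(history, periods_left, n_scenarios, rng):
    """Placeholder conditional scenario tree rooted at the current node."""
    last = history[-1] if history else 0.04
    return [[last + 0.005 * rng.gauss(0, 1) for _ in range(periods_left)]
            for _ in range(n_scenarios)]


def resolve_model(tree, wealth):
    """Placeholder re-optimization: returns the updated wealth of the chosen portfolio."""
    return wealth * 1.01  # stand-in for the surplus/deficit of the re-solved model


def rolling_horizon(T=10, n_paths=100, n_scenarios=150, seed=0):
    rng = random.Random(seed)
    results = []
    for _ in range(n_paths):                      # one simulated "real-world" path
        history, wealth = [], 100.0
        for t in range(1, T):                     # re-solve at every node while t < T
            history.append(0.04 + 0.01 * rng.gauss(0, 1))
            tree = conditional_tree(history, T - t, n_scenarios, rng)
            wealth = resolve_model(tree, wealth)  # steps 4 and 5 of the methodology
        results.append(wealth)
    return results


if __name__ == "__main__":
    res = rolling_horizon()
    print(sum(res) / len(res))                    # average terminal wealth across paths
```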

Decision evaluation and performance measurement


There are different types of decision evaluation techniques, of which we use six: stress testing, backtesting, in-sample testing, out-of-sample testing, scenario analysis, and what-if analysis. Stress testing is used to test the models using extreme data. Three cases can be considered: how well will the models perform if (a) there are low interest rates in the future, which will mean the discounted present values of the liabilities are relatively larger, or (b) some bonds default in the future, making it impossible to match all the liabilities. Another stress case is (c) a possible decrease in mortality in the future. Pension funds are using current mortality tables, whereby they know that the mortality rates will decrease in the future, making their liability stream longer and distributed differently. Both simulation methods can be used for this type of decision evaluation. For backtesting, the models are run with past data and the solutions show what would have happened if it had been decided to implement them in the past. Using data from 2000 onwards is especially important, since in that period pension funds were experiencing higher deficits. Figure 3 shows historical data of the LIBOR from 1997 till 2007. Good results for that period will show the quality of the models. Only the multistage simulation method is appropriate for backtesting, where the model is rerun every year with the new knowledge of the world. Fleten et al. (2002) showed that the discrepancy between the dynamic stochastic approach and the fixed mix approach solutions is less in the out-of-sample testing than in the in-sample testing; this is due to the fact that the stochastic model adapts to the information available in the scenario tree more in-sample than out-of-sample. In scenario analysis we analyze possible future events by considering alternative scenarios. Only the two-stage simulation methodology is suitable for this analysis. The models are solved for each scenario, the first stage decisions are fixed, and then the models are solved again with the other scenarios. In what-if analysis we project what will happen under each scenario. Only the two-stage simulation method can be used, whereby the optimization scenarios are the same as the simulation scenarios.

Figure 3 Historical LIBOR data (12 month LIBOR)

The performance of any fund can be measured using different risk measurements, the simplest one being the standard deviation. Fund managers look at the standard deviation of excess return over the risk-free rate or some benchmark. The standard deviation of the difference between the fund's return and a benchmark return is also called the tracking error. The higher the standard deviation, the higher the expected return for an investment and the more risk bearing the investor. We will be using the Sortino ratio and the funding ratio. The Sortino ratio roots from the Sharpe ratio [Sharpe (1994)], but it is widely used across the industry since it only penalizes a portfolio's underperformance via downside deviation. These two ratios are used substantially; we provide the formulae in summary form. The Sortino ratio is calculated by

S = (R_P - R_I) / δ_d  (1)

where R_P = [∏_{i=1..N} (1 + R_Pi)]^(1/N) - 1 and δ_d = √[(1/N) ∑_{i=1..N} (min(R_Pi - R_I, 0))²]. The term δ_d is also called the downside deviation or target semi-deviation.

The funding ratio is the ratio of a pension fund's assets to its liabilities; a funding ratio of greater than one suggests that the pension fund is able to cover all its obligations to its members, while a funding ratio of less than one indicates that the pension fund is currently unable to meet all its obligations:

F_t = Assets_t / Liabilities_t  (2)

Computational investigation

Computational results are given that highlight the methods of pension fund performance evaluation: (a) backtesting using data from the last decade, (b) stress testing using low interest rate scenarios, (c) stress testing using high interest rates, and (d) stress testing including the possibility of bonds defaulting. The other portfolio evaluation tests include the Sortino ratio and the funding ratio, which are both widely understood as risk-adjusted performance measurements in the industry by fund managers. Table 1 shows the number of bonds available at each year within our constrained rating classes. Each year some bonds migrate to another rating class; if they fall out of our given rating range, i.e., if their rating is below BBB, they cannot be bought into the portfolio anymore. The last row shows the total number of bonds available for our portfolio. The data is based on using average transition matrices from 1995-2005, where each matrix gives the probability of staying in or changing the rating class.

Rating   t=0   t=1   t=2   t=3   t=4   t=5   t=6   t=7   t=8   t=9
AAA      156   148   140   133   126   120   114   108   102    97
AA        88    89    90    90    90    89    89    88    87    85
A         72    75    79    82    85    88    90    92    94    96
BBB       60    61    61    62    63    63    64    65    65    66
Total    376   373   369   367   364   360   356   352   348   344

Table 1 Number of bonds available after migration after t years

The computational study is tested on the decision models proposed in Schwaiger et al. (2010), where four models are proposed for a pension fund ALM problem: a linear deterministic programming model minimizing PV01 deviations between assets and liabilities and minimizing initial injected cash (from the members and the sponsoring company), and three stochastic programing models incorporating uncertainty around the interest rates that affect both assets and liabilities. The stochastic programing models minimize present value deviations between assets and liabilities and initial injected cash. The two-stage stochastic programing model is extended to include chance constraints and integrated chance constraints. The models have a 45 year time horizon, 376 bonds to choose from, and the stochastic models are initialized with 150 interest rate scenarios.

Figure 4 What-if performance of the SP, CCP, and ICCP model from 1997-2006 (panels: a) two-stage simulation results of the LP model; b) the SP model; c) the CCP model; d) the ICCP model)
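For reference, the two risk-adjusted performance measures defined in equations (1) and (2) above can be computed with the short Python sketch below; it is our illustration, and the return series and asset/liability values are hypothetical.

```python
# A minimal sketch of the Sortino ratio (eq. 1) and the funding ratio (eq. 2).
from math import sqrt


def sortino_ratio(portfolio_returns, target_return):
    """(Geometric mean return - target) / downside deviation."""
    n = len(portfolio_returns)
    compound = 1.0
    for r in portfolio_returns:
        compound *= (1.0 + r)
    mean_return = compound ** (1.0 / n) - 1.0
    downside = sqrt(sum(min(r - target_return, 0.0) ** 2 for r in portfolio_returns) / n)
    return (mean_return - target_return) / downside if downside > 0 else float("inf")


def funding_ratio(assets, liabilities):
    """F_t = Assets_t / Liabilities_t; greater than one means fully funded."""
    return assets / liabilities


if __name__ == "__main__":
    returns = [0.02, -0.01, 0.015, 0.03, -0.02]  # hypothetical yearly portfolio returns
    print(f"Sortino ratio : {sortino_ratio(returns, target_return=0.01):.2f}")
    print(f"Funding ratio : {funding_ratio(assets=105.0, liabilities=100.0):.2f}")
```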

Figure 5 SP model stress testing: low interest rate surplus/deficit distribution (frequency distributions at t=2, t=10, t=20, and t=40)

The first set of results shows the what-if analysis using the two-stage simulation methodology: Figures 4a to 4d show the performance of the liability and asset present values of the LP, SP, CCP, and ICCP models, respectively. The present values of the assets and liabilities are stochastic since they are discounted using the stochastic interest rates. Figure 4a shows the expected performance of the assets and liabilities of the LP model, while Figures 4b, 4c, and 4d show the asset and liability performance within each scenario. The LP model performs the worst, since it is only useful for small shifts in the yield curve, while the tests allow for large shifts in the yield curve. The SP model performs well on average; however, it does not match the liabilities perfectly, although this is one of the objective functions of the model. Looking at the performances of the assets and liabilities within each scenario and plotting the difference, a positive performance (surplus) can be seen in most time periods, but also deficits in other time periods. In our tests, the SP model has outperformed its liabilities, but there is no guarantee that this might not

turn into underperformance in some cases, since we are minimizing both underdeviations and overdeviations at the same level. The seemingly poor performance of the CCP model can be explained as follows: the chance constraints are only included in the first three time periods, and it can be seen that the assets match the liabilities perfectly until time period five. The conclusion is either to set the chance constraints for more time periods or to set them for low time periods and rerun the model in between. The ICCP model matches its liabilities with the assets perfectly. Looking at Figure 4d in particular, it can be seen that no matter what scenario occurs, the pension fund has a perfect asset/liability match (suggested by the converging straight line at zero of A-L).

The next set of results are stress testing results using low interest rate scenarios. The surplus/deficit frequency distribution at four different time periods, t=2, t=10, t=20, and t=40, and for all four models is calculated. The results in Figure 5 show the surplus/deficit frequency distribution of the SP model using low interest rate scenarios. The SP model performs relatively well (in matching terms) compared to the other models, and in both cases of high and low interest rate scenarios.

The next results, the surplus/deficit frequency distribution of the CCP model using low interest rate scenarios, show that the CCP model generates a good asset/liability matching outcome, with the surplus/deficit being around zero over the four time periods. It can match the asset and liability present value deviations no matter what happens to the interest rates; this is due to the restriction of underfunding events which was incorporated into the model. In low interest rate times the ICCP model guarantees fairly well a perfect matching. In low interest rate events, the discounted liabilities are higher than in high interest rate events; when liabilities are valued higher, the ICCP model protects the pension fund from interest rate risk and from generating a deficit. As expected, the LP model underperforms far more than all the other models in low interest rate scenarios since it is only a static decision model and does not take into account any future uncertainty. Considering the performance of the funds, the low interest rate scenarios confirm our previous expectations: the SP model performs well even during low interest rates, generating only low deficits at a few time periods. Due to a high reliability level the CCP only generates a surplus, which is relatively high. The ICCP model does generate a deficit at some time periods, but it is restricted and it best matches the assets with the liabilities.

The LP model generates large deficits in the long term. The same study has been conducted using extreme events of high interest rate scenarios and the outcomes in summary are: the ICCP model never generates a deficit and outperforms the other models in terms of matching. The LP model performs far better during high interest rate times, but still does not outperform any of the stochastic programming models.

Figure 6 Funding and Sortino ratios for stress testing (low interest rates), plotted for the SP, CCP, and ICCP models at each time period

Performance measurement
Again we look at the funding ratio and at the Sortino ratio of the SP, CCP, and ICCP models during low interest rate scenarios; these are plotted in Figure 6. As expected, the ICCP model has a funding ratio of one, which indicates a perfect matching of assets and liabilities at all time periods. The SP model also achieves the desired funding ratio of one in the low interest rate scenarios. In the CCP model the assets outperform the liabilities during low interest rates. Turning to the Sortino ratio, the CCP model has an upward sloping curve for low interest rate scenarios. The SP model has a Sortino ratio close to zero during low interest rates, which again means a close present value match of the assets and liabilities. The Sortino ratio of the ICCP model is also close to zero during low interest rates.
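As an illustration of how these two measures can be computed, the following minimal sketch (in Python, not part of the original study) evaluates a funding ratio and a Sortino ratio from arrays of asset and liability present values; the toy numbers and the zero target rate are assumptions for illustration only.

```python
import numpy as np

def funding_ratio(asset_pv, liability_pv):
    """Funding ratio per period: present value of assets over present value of liabilities."""
    return np.asarray(asset_pv, float) / np.asarray(liability_pv, float)

def sortino_ratio(returns, target=0.0):
    """Sortino ratio: mean excess return over the target divided by the downside deviation."""
    r = np.asarray(returns, float)
    downside = np.minimum(r - target, 0.0)
    dd = np.sqrt(np.mean(downside ** 2))        # downside deviation
    return np.nan if dd == 0 else (r.mean() - target) / dd

# Toy illustration with hypothetical present values (not the paper's data)
assets = np.array([100.0, 103.0, 101.5, 104.0])
liabilities = np.array([100.0, 102.0, 102.5, 103.0])
print(funding_ratio(assets, liabilities))
surplus_changes = np.diff(assets - liabilities)  # change in the surplus per period
print(sortino_ratio(surplus_changes, target=0.0))
```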

Conclusions
We introduced a framework for two simulation methodologies: two-stage simulation and multistage simulation. We have also applied six different decision evaluation techniques to test the results of four pension fund ALM decision models. We report the results of the empirical study which tests the models proposed in Schwaiger et al. (2010). The interest rate scenarios are generated using the CIR model and the liability stream is generated using pension mathematics practice, U.K. mortality tables, and inflation rate scenarios. The computational results reported here highlight the methods of pension fund performance evaluation: (a) backtesting using data from the last decade, (b) stress testing using low interest rate scenarios, (c) stress testing using high interest rates, and (d) stress tests including the possibility of bonds defaulting. The other portfolio evaluation tests include the use of the Sortino ratio and the funding ratio, both of which are widely used by fund managers in the industry as risk-adjusted performance measures. Based on the empirical study we can make the following summary observations: (i) the stochastic programming models dominate the deterministic programming models when interest rate risk is considered, (ii) the chance constrained programming model and the integrated chance constrained programming model need a higher initial cash injection to limit possible future deficit events, (iii) the integrated chance constrained programming model does not lead to a deficit during backtesting and stress testing, while at the same time it does not lead the pension fund to severe overfunding, and (iv) from a computational performance perspective the ICCP model has CPU solving times similar to those of the SP model, while the CCP model leads to expensive CPU times. Pension fund managers should adopt stochastic programming models to hedge against changes in the present values of the liabilities. Although the SP models are more costly to implement, they leave the fund less exposed to a deficit. From a computational perspective fund managers can easily run the LP, SP, and ICCP models with a large asset universe and time horizon within a reasonable time.
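For readers who wish to reproduce the flavor of the scenario generation step, the following sketch simulates CIR short-rate paths with an Euler discretization; the parameter values are hypothetical and are not those calibrated in the study.

```python
import numpy as np

def simulate_cir(r0, kappa, theta, sigma, dt, n_steps, n_scenarios, seed=0):
    """Euler discretization of the CIR short-rate model
    dr = kappa*(theta - r)*dt + sigma*sqrt(r)*dW, with rates floored at zero."""
    rng = np.random.default_rng(seed)
    rates = np.empty((n_scenarios, n_steps + 1))
    rates[:, 0] = r0
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_scenarios)
        r = rates[:, t]
        rates[:, t + 1] = np.maximum(
            r + kappa * (theta - r) * dt + sigma * np.sqrt(np.maximum(r, 0.0)) * dw, 0.0)
    return rates

# Hypothetical parameters, annual steps over a 40-year horizon
scenarios = simulate_cir(r0=0.04, kappa=0.2, theta=0.05, sigma=0.08,
                         dt=1.0, n_steps=40, n_scenarios=1000)
```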

References

Fleten, S. E., K. Hoyland, and S. W. Wallace, 2002, The performance of stochastic dynamic and fixed mix portfolio models, European Journal of Operational Research, 140:1, 37-49
Kouwenberg, R., 2001, Scenario generation and stochastic programming models for asset liability management, European Journal of Operational Research, 134:2, 279-292
Kusy, M. I., and W. T. Ziemba, 1986, A bank asset and liability management model, Operations Research, 34:3, 356-376
Mitra, G., and K. J. Schwaiger (eds.), 2011, Asset and liability management handbook, Palgrave
Mulvey, J. M., G. Gould, and C. Morgan, 2000, An asset and liability management system for Towers Perrin-Tillinghast, Interfaces, 30:1, 96-114
Oguzsoy, C. B., and S. Güven, 1997, Bank asset and liability management under uncertainty, European Journal of Operational Research, 102:3, 575-600
Schwaiger, K. J., 2009, Asset and liability management under uncertainty: models for decision making and evaluation, PhD thesis, CARISMA, Brunel University
Schwaiger, K. J., C. Lucas, and G. Mitra, 2010, Alternative decision models for liability determined investment, Journal of Asset Management, special issue on ALM
Zenios, S. A., M. R. Holmer, R. McKendall, and C. Vassiadou-Zeniou, 1998, Dynamic models for fixed-income portfolio management under uncertainty, Journal of Economic Dynamics and Control, 22:10, 1517-1541


PART 2

Behavioral Finance and Technical Analysis


Kosrow Dehnad, IEOR Department, Columbia University, and Quantitative Trading, Samba Financial Group

Abstract
Behavioral finance has challenged many claims of the efficient market hypothesis (EMH). Unfortunately, many of these challenges are in the form of anecdotal evidence and lack quantification. This article uses market data, together with some simple statistics, to show that in practice certain assertions of EMH and mathematical finance can be rejected with a high degree of confidence. The working of the FX market is used to demonstrate certain shortcomings of elegant results in mathematical finance that render them irrelevant in practice. An approach based on Markov chains is developed to model heuristic notions, such as fast market, support, and resistance, that are widely used by technical analysts and practitioners. Using market observations, it is shown that this model fits historical data better than the model implied by the assumption that daily returns are independent and normally distributed.

Behavioral finance started as an irritating challenge to advocates of the efficient market hypothesis (EMH) and was initially ignored by them. It exposed instances of investor behavior that contradicted the main tenets of EMH, which assumes that investors are rational decision makers and that at each instant the prices of securities reflect all available information about them. This implies that all the efforts by security analysts and traders to beat the market are futile and that investors should simply construct a portfolio consisting of a combination of risk-free assets and the market portfolio which best suits their risk appetite, and then sit back and enjoy the fruits of their investments. The EMH found an ally in the mathematical finance community, where the above assumptions implied that prices were a martingale process, thus enabling researchers in that field to use the techniques of Brownian motion to prove many elegant theorems. Further, by blending some supposedly minor simplifying assumptions into their work, such as the absence of bid-ask spreads or taxes, the homogeneity of time, or the ability of investors to lend and borrow at the same rate and as much as they desire, theoreticians opened the floodgates of dynamic hedging and option replication, which has become an industry in itself. There has, however, been growing evidence contradicting EMH, such as: the performance of investors such as Warren Buffett, George Soros, and Peter Lynch, who have beaten the market year after year; the mini crash of 1987; the great recession of 2008; and the flash crash of 2010. Moreover, studies in behavioral finance have shown that certain hardwired biases and habits make humans a far cry from the rational and calculating decision makers who pounce on any opportunity to maximize their profit. As such anecdotal evidence has accumulated, exponents of EMH have had to reluctantly modify their stances by introducing different flavors of efficiency, such as strong form efficiency, weak form efficiency, etc., and in all cases the statement of "all available information about a stock" is left as a nebulous concept. It should be pointed out that certain market practitioners, the so-called technical analysts or chartists, never paid any attention to the results of mathematical finance and EMH, which go against the very grain of their work. These practitioners believe that certain price patterns repeat themselves and provide profit opportunities. Consequently, they pore over historical data and draw charts with terms such as support, resistance, channel, head-and-shoulders, and momentum, which according to EMH have no informational value whatsoever, in order to gain insight into market sentiment that will hopefully give them a trading edge. The chartists' justification for their approach to the market is intuitive and suffers from a lack of quantification, even though they use certain statistical notions such as moving averages or ratios. To justify some of these approaches, psychologists have joined the fray and tried to provide an explanation for the way the market behaves using psychology. Figure 1 is a cognitive psychology explanation of the oscillation of prices that fall into a rising or falling band called a channel.
[Chart: a price path oscillating between a lower band and a higher band over time]

Figure 1 Channel formation

Most market observers are familiar with statements such as: "the market showed a number of abrupt rises interrupted by sideways movement in the congestion area, previous buyers selling to take home profits, and the new buyers taking advantage of an opportunity to get in. Sellers, also participating all the way up, each time noted that the market reached new higher peaks and that they should have stayed firm. Small drops were therefore used to come back in, and each increase provoked new buy interest." Such descriptions often lack hard data and statistical analysis to support them, and are in part responsible for the fact that the EMH camp dismisses some of the findings of behavioral finance with statements such as "are these the tip of the iceberg or the whole iceberg?", because in the absence of any statistical analysis such descriptions of the causes of market behavior remain an interesting read at best. This article attempts to use statistical methods and market data to demonstrate that continuous time models based on Brownian motion disregard some of the basic characteristics of certain markets and the behavior of their participants. This inattention has major practical implications and renders some of the results in mathematical finance a theoretical construct at best.

Homogeneity of time
One of the assumptions of mathematical finance is the homogeneity of time. Namely, the evolution of asset prices in one hour is assumed to be independent of whether this one hour is from 5 to 6 pm New York time, when all major financial markets are closed, or from 8 to 9 am on the first Friday of the month, when important U.S. economic data are released that can considerably move the markets. Practitioners pay special attention to this difference and market data also support this distinction. Formally, the independent increment assumption implies that price movements in the interval [t, t + Δ] are a function of Δ only and do not depend on t. This premise makes Brownian motion tools and techniques applicable in

modeling price movements. We propose the following statistic for testing this hypothesis. Let Q[t, t + Δ] be a certain observed quantity during the time interval [t, t + Δ], for example the volume of trades in a certain stock, or its realized volatility, etc. For a given day i and two different periods [ti, ti + Δ] and [t'i, t'i + Δ'], let Qi = Q[ti, ti + Δ] and Q'i = Q[t'i, t'i + Δ']. If time is homogeneous, then Pr[Qi ≥ Q'i] = Pr[Qi ≤ Q'i] = 1/2. For example, if Q represents trade volume, the above hypothesis implies that it is equally likely for the volume from 9:30 am, when the market opens, until 12:30 to be larger or smaller than that from 1 pm until 4 pm, when the market closes. Let us define Xi = 1 if Qi ≥ Q'i and Xi = 0 otherwise. Under the assumption of homogeneity of time, the Xi are iid observations with Pr[Xi = 1] = 1/2, and for a sufficiently large number of observations n the statistic S = (2ΣXi - n)/√n has approximately a standard normal distribution. This result can be used to test the hypothesis of homogeneity of time. For illustration, consider the volume of trades in AT&T (T) shares between 9:30 am and 12:30, Qi = Volume[9:30, 12:30], and from 1 pm till 4 pm, Q'i = Volume[1, 4], for business days from Monday, October 4th, 2010 until Friday, April 29th, 2011. During this period the volume of shares traded in the morning was larger than that in the afternoon less than 12 percent of the time. The resulting test statistic is 9.03 in absolute value, which implies that the null hypothesis of homogeneity of time can be rejected with a p-value well below 0.001.
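A minimal sketch of this sign test, assuming the morning and afternoon quantities are available as arrays (the volume figures below are hypothetical, not the AT&T data used in the text):

```python
import numpy as np
from scipy.stats import norm

def homogeneity_sign_test(q_morning, q_afternoon):
    """Sign test for homogeneity of time: under the null, the morning quantity
    exceeds the afternoon quantity with probability 1/2 on each day."""
    q_m, q_a = np.asarray(q_morning), np.asarray(q_afternoon)
    x = (q_m > q_a).astype(int)          # X_i = 1 if the morning quantity is larger
    n = len(x)
    s = (2 * x.sum() - n) / np.sqrt(n)   # approximately N(0,1) under the null
    p_value = 2 * norm.sf(abs(s))        # two-sided p-value
    return s, p_value

# Hypothetical daily volumes (shares) for a few trading days, for illustration only
morning = np.array([1.1e7, 0.9e7, 1.0e7, 0.8e7, 1.2e7])
afternoon = np.array([1.5e7, 1.4e7, 1.3e7, 1.6e7, 1.1e7])
print(homogeneity_sign_test(morning, afternoon))
```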

Time varying nature of volatility and its practical implications

Unlike equity markets, foreign exchange (FX) markets are theoretically open 24 hours a day, 7 days a week, and there is no central exchange for them. They do, however, exhibit some of the same characteristics as equity markets with regard to volume, liquidity, and bid-ask spread. For illustration, consider the Japanese yen (JPY) market. The general course of daily trading in JPY is along the following lines: there is very little liquidity and trading activity until the Tokyo market opens and price action begins. The liquidity and price action markedly increase with the opening of London and reach their peak when New York opens and its traders also join the action. Liquidity starts to taper off with the closing of Tokyo, followed by London and New York. With the closing of New York the price action becomes minimal, particularly if it is the last working day of the week or the start of a major holiday such as New Year's Day. And for all practical purposes liquidity and volatility die down until Tokyo opens again and the cycle repeats itself. All practitioners are acutely aware of these facts and adjust their trading accordingly. In other words, the bid-ask spread can be quite wide when the market lacks liquidity, and it drops to one tick when the market is most active. This implies that trading time must not be treated as homogeneous, and that market behavior and volatility during the interval [t, t + Δt] do indeed depend on t. Of course, the importance of bid-ask spread, liquidity, and volatility in trading depends on the investment horizon and trading style. In the case of the so-called global macro strategies, these factors are of secondary importance since such strategies try to capture long-term market trends and ignore day-to-day price variations. On the other hand, in daily trading and market making the important issues could be the levels of stop loss (SL) and take profit (TP), where such trigger levels should be placed relative to those of other market participants, and whether these levels are bunched together, creating support or resistance levels. The practical implication of these points is that a wide and supposedly safe stop loss might be triggered overnight because of low liquidity and a high bid-ask spread. The trigger could also be caused by sudden spikes and drops in price because of temporary imbalances in supply and demand due to, say, the arrival of unexpected news or large orders. Continuous time finance assumes all price movements over a short period of time to be infinitesimal and uses an annual volatility number to describe the distribution of these movements. In practice, the arrival of a large order or a new piece of information can cause the market to spike and gap even for assets with a deep and liquid market such as JPY/USD. In particular, a sudden spike could trigger stop losses that in turn trigger additional stop loss orders, temporarily resulting in a destabilizing positive feedback system. For example, continuous time finance is incapable of explaining the sudden drop of 1,000 points in the Dow index in a few minutes, and its subsequent recovery in a few hours, on May 6, 2010, attributed to computer orders and "fat finger" trades.

These sudden movements, when translated into an annual volatility, can result in unrealistically high numbers such as 20,000 percent! To model such phenomena, or the fact that volatility changes over time, a number of techniques and models have been proposed, from jump processes to ARCH and GARCH models. These models, which are mathematically elegant and sophisticated, require the estimation of a number of parameters and pose the daunting task of selecting among a number of competing models that all fit the data reasonably well. Some argue that the volatilities used in option pricing models should be viewed as an estimate of the average long run variability of the underlying parameters. In practice, however, market participants must adjust their behavior and actions based on what they perceive to be those of other participants and balance the elegance of purely mathematical theories against the relevance of market realities such as risk limits. Let us not forget that after dropping a lot of money and getting fired for it, there is rarely a chance to prove to the management that, in the long run and on average, our losing trades would have been profitable if only the market had been given enough time to turn!
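As a minimal illustration of the kind of time-varying volatility model mentioned above, the following sketch filters a return series through a GARCH(1,1) variance recursion with assumed, rather than estimated, parameters; in practice the parameters would be estimated, for example by maximum likelihood.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """GARCH(1,1) conditional variance recursion:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    r = np.asarray(returns, float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                       # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Hypothetical daily returns; annualized volatility path = sqrt(252 * sigma2_t)
rng = np.random.default_rng(3)
rets = rng.normal(0, 0.006, 1000)
ann_vol = np.sqrt(252 * garch11_variance(rets))
```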

Assumption of normal returns


Another simplifying assumption of continuous time models that manifestly does not hold in practice is that returns are normally distributed even on an infinitesimal scale, i.e., for price movements during a day. Assets with sufficient liquidity move in small, quantum increments, say one tick. For example, in the case of USD/JPY, at each instant the price can either go up or down by that quantum amount of one tick irrespective of whether JPY is trading at 90.50 or 95.50. Consequently, on an infinitesimal scale the returns cannot be normally distributed, since the size of each move, i.e., one tick, is independent of the price level. On the other hand, the assumption of normal returns even on micro levels, which renders such processes infinitely divisible, is essential in order to use the powerful tools and techniques of Brownian motion and derive elegant mathematical results. Further, there is always a bid-ask spread associated with each trade, and this fact is also conveniently glossed over in some mathematical results. The bid-ask spread and the minimum size of a move have major practical implications, and practitioners are not oblivious to them. For example, in the dynamic hedging of an option position, if rebalancing is followed blindly the way it is prescribed by theory, it will take the option trader to the poor house and will render dynamic hedging impractical. Practitioners therefore refrain from rebalancing their hedges for every minor move of the market. In practice, after placing the initial hedge (the so-called delta hedge), subsequent rebalancing is done less frequently and on a portfolio basis in order to benefit from the portfolio effect and save the bid-ask spread. Assuming a zero bid-ask spread is like driving under the assumption that a car can stop immediately when the brake is pressed: it might be a good approximation, but it never holds in practice. This assumption also implies that liquidity providers provide liquidity as a public service or an act of charity! Another implication of the assumption of normality of returns is the possibility, though highly improbable, of very large daily price moves. In FX markets, daily exchange rates have some rather hard lower bounds based on the size of economies and purchasing power parity, which make such large movements impossible. For example, if the JPY/USD exchange rate suddenly falls to 20 from 85, it implies that in a short period the Japanese economy has become a multiple of that of the U.S. Though this is a theoretical possibility under the assumption of normal returns, in practice it should be treated as a fantasy.


Figure 2 Q-Q plot of U[0,1] versus actual daily JPY (close-low)/(high-low)


Figure 3 Q-Q Plot of U[0,1] versus simulated values (close-low)/(high-low)

Markov chain model

An approach that better reflects the daily behavior of FX markets is to divide the trading day into intervals with different intensities of arrival of new trades. This provides the flexibility to model fast markets, slow markets, etc. The division of time can be based on historical data and can accommodate traders' views of market liquidity by assigning a bid-ask spread to each segment. For example, from the close of New York to 7 am Tokyo time the bid-ask spread can be eight ticks with a low intensity of new trades, while from 9:00 to 9:15, when economic data are released and trading is very heavy, the bid-ask spread is only one tick. In this model, if pi is the price at instant i, then with probability one half the next trade is either at pi + 0.5*(bid-ask spread) or at pi - 0.5*(bid-ask spread).

Support and resistance levels can be modeled as follows. We assume a certain concentration of buy orders, say 20, at a certain level, say 90.50. Note that a buy order refers to buying USD, the base currency, and selling JPY, the term currency. This concentration of orders creates a support level for USD at 90.50: if the next trade is a sell order, the price stays at 90.50 rather than falling to 90.49, but the buy concentration is reduced by one unit to 19. If the next trade is again a sell order the concentration is further reduced to 18, and this continues until all buy orders are taken out, at which point the next sell order moves the market down to 90.49. By adjusting the probability of an up or down move after the support is breached, one can model the sharp drop in prices that usually follows, with the next support lying where the next batch of buy orders is bunched together. Most technical analysis jargon and concepts can be modeled by changing the transition probabilities and the concentration of buy or sell orders (i.e., stop loss or take profit orders) that create support and resistance.
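The following sketch is one possible implementation of this idea, with hypothetical parameters (the tick size, support depth, and post-breach probability are assumptions, not values from the article):

```python
import random

def simulate_ticks(p0, tick, n_steps, support_level, support_depth,
                   p_up=0.5, p_up_after_breach=0.3, seed=42):
    """Tick-level random walk with a support level: sell trades arriving at the
    support consume resting buy orders instead of moving the price; once that
    depth is exhausted the price breaks through and drifts down more easily."""
    random.seed(seed)
    price = round(p0 / tick)                 # work in integer ticks to avoid float drift
    support = round(support_level / tick)
    depth, breached, path = support_depth, False, [p0]
    for _ in range(n_steps):
        up_prob = p_up_after_breach if breached else p_up
        if random.random() < up_prob:
            price += 1                       # buy trade: price ticks up
        elif price == support and depth > 0:
            depth -= 1                       # sell trade absorbed by resting buy orders
        else:
            price -= 1                       # sell trade: price ticks down
            if price < support:
                breached = True              # support broken: sharper moves down
        path.append(price * tick)
    return path

# Hypothetical USD/JPY-style path: support at 90.50 backed by 20 resting buy orders
path = simulate_ticks(p0=90.60, tick=0.01, n_steps=200,
                      support_level=90.50, support_depth=20)
```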


Market observations
In this section we use a simple test based on market observations to determine which of the two models, the Brownian motion model or the Markov chain model described in the previous section, is a better representation of the daily USD/JPY market. Consider the daily close of JPY relative to the high and low of that day. If we let low = 0 and high = 1, the ratio (close - low)/(high - low) represents the relative position of the closing price in the interval [0,1]. The Markov chain model of price movements implies that this number should have a uniform distribution U[0,1]. This follows from the observation that all states of the Markov chain are accessible and recurrent; hence the asymptotic distribution of the states, i.e., of the closing price, is uniform. This ratio has been calculated for the past ten years. Figure 2 is the Q-Q plot of the empirical distribution of these ratios against that of U[0,1] implied by the Markov chain model. Figure 3 shows a similar plot where simulation is used to generate the data assuming a lognormal distribution of prices. It is clear that U[0,1] is a better fit for the distribution of the above ratios; hence the Markov chain model seems to be a better model of price movements than continuous time models based on a normal distribution of returns.
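A minimal sketch of this check, assuming daily high, low, and close arrays are available (the arrays below are simulated stand-ins, not the ten years of USD/JPY data used in the article); a Kolmogorov-Smirnov test is used here as a numerical companion to the Q-Q plots:

```python
import numpy as np
from scipy.stats import kstest

def close_position_ratio(high, low, close):
    """Relative position of the close within the day's range: (close - low)/(high - low)."""
    high, low, close = map(np.asarray, (high, low, close))
    day_range = high - low
    return np.where(day_range > 0, (close - low) / day_range, 0.5)

# Hypothetical OHLC-style arrays for illustration only
rng = np.random.default_rng(0)
low = 90 + rng.normal(0, 0.5, 2500)
high = low + rng.uniform(0.2, 1.0, 2500)
close = low + rng.uniform(0, 1, 2500) * (high - low)

ratios = close_position_ratio(high, low, close)
# KS test against U[0,1]; a Q-Q plot would compare sorted ratios against the
# uniform quantiles (k - 0.5)/n.
print(kstest(ratios, "uniform"))
```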

Conclusion
In the daily trading of very liquid assets, such as JPY, one has to be cognizant of notions such as market sentiment, support, and resistance. Many of the mathematical models applied in finance ignore these notions and dismiss them, despite the fact that all practitioners are acutely aware of them. Continuous time finance also makes certain simplifying assumptions that greatly reduce the applicability of its results to daily trading. This article demonstrates some of these issues and proposes a modified Markov chain model that enables practitioners to model, in a consistent manner, concepts that are used in technical analysis. It also remedies one of the major shortcomings of technical analysis, which is its inability to assign probabilities to various outcomes, a fundamental ingredient of successful trading.


PART 2

The Failure of Financial Econometrics: Assessing the Cointegration Revolution


Imad Moosa Professor of Finance, School of Economics, Finance and Marketing, RMIT1

Abstract
One aspect of the failure of financial econometrics is the use of cointegration analysis for financial decision making and policy analysis. This paper demonstrates that the results obtained by using different cointegration tests vary considerably and that they are not robust with respect to model specification. It is also demonstrated that, contrary to what is claimed, cointegration analysis does not allow distinction between spurious relations and genuine ones. Some of the pillars of cointegration analysis are not supported by the results presented in this study. Specifically, it is shown that cointegration does not necessarily imply, or is implied by, a valid error correction representation and that causality is not necessarily present in at least one direction. More importantly, however, cointegration analysis does not lead to sound financial decisions, and a better job can be done by using simple correlation analysis.

1 I am grateful to the editor of this journal for encouraging me to write this paper and to Kelly Burns for useful comments.


In the second half of the 1980s, specifically following the publication of the seminal paper of Engle and Granger (1987), the world of academic finance and economics experienced a revolution similar to that experienced by the worlds of music and dancing as a result of the introduction of rock and roll and the twist. Engle and Granger formalized their work on cointegration and error correction and subsequently adapted the causality test of Granger (1969) to take into account the possibility of cointegration. The introduction of these techniques has created a thriving industry with a rapidly growing output of papers written by academics testing theories in economics and finance that were previously tested using straightforward regression analysis. Tens of thousands of papers and PhDs later, it is about time to ask whether or not the cointegration revolution has changed our lives and led to discoveries that enhance our understanding of the working of the economy and financial markets, which is presumably the objective of scientific research. One would tend to imagine that, since this work was awarded the Nobel Prize, it must be valued the same way as the discovery of penicillin, which was awarded the same prize. However, it seems to me that while cointegration analysis has provided the means for finance and economics academics to get their promotion and students to obtain their PhDs, the technique has contributed almost nothing to the advancement of knowledge. The objective of this paper is to demonstrate that cointegration analysis, error correction modeling, and causality testing are misleading, confusing, and provide a tool for proving preconceived ideas and beliefs. More important, however, is the hazardous practice of using the results of cointegration analysis to guide policy and financial operations, including investment, financing, and hedging. With the help of examples on stock market integration and international parity conditions it will be demonstrated that cointegration analysis produces results that tell us nothing and that for practical purposes these results are useless at best and dangerous at worst.

Cointegration, error correction, and causality

Cointegration implies that a linear combination of two (or more) variables is stationary although the variables themselves are nonstationary in the sense that they tend to wander around over time. When variables are cointegrated, they are said to be tied together by long-run equilibrium relations. This means that while it is possible to deviate from the long-run condition in the short run, these deviations tend to disappear with the passage of time as a result of the tendency to move back to the equilibrium condition (the phenomenon of mean reversion). A simple two-variable cointegrating regression (which normally includes a constant term) may be written as yt = α + βxt + εt (1). For xt and yt to be cointegrated, the necessary condition is that xt ~ I(1) and yt ~ I(1), whereas the sufficient condition is that εt ~ I(0). What distinguishes a pair of cointegrated variables from a pair that are spuriously related is that a linear combination of I(1) series produces another I(1) series except when their underlying long-run movements affect each other, so that the residuals obtained from the linear combination are I(0). Conversely, variables that are spuriously related (perhaps due to a dominant time trend) but do not share the same underlying long-run movement would not produce I(0) residuals. It will be demonstrated later that this proposition is questionable. The basic test for cointegration between xt and yt utilizes the DF statistic, which is the t ratio of ρ in the Dickey-Fuller regression as applied to the residuals êt of the cointegrating regression (1). The regression used to conduct the residual-based test is specified as Δêt = ρêt-1 + ut (2). Engle and Granger (1991) assert that the t statistic of ρ no longer has the Dickey-Fuller distribution; this is because when the cointegrating parameter (β) is estimated, the residuals êt appear slightly more stationary than if the true value of the parameter were used. The distribution of the test statistic is known as the Engle-Granger distribution. To reject the null hypothesis of nonstationarity of the residuals (and therefore non-cointegration), the value of the test statistic must be significantly negative (since a value of zero implies a random walk). The critical values of this test statistic are tabulated in Engle and Granger (1987) and Engle and Yoo (1987, 1991), and more precise values are found in MacKinnon (1991). The Dickey-Fuller regression may be modified to produce the augmented version, which is written as Δêt = ρêt-1 + Σ(i=1 to m) δi Δêt-i + ut (3).

The residual-based approach, as proposed by Engle and Granger, has been criticized on the following grounds. First, conflicting results are likely to be obtained from the DF and ADF tests (depending on the lag length), which may be attributed to the low power of these tests in small samples. Second, extending the method to the multivariate case produces weak and biased results [Gonzalo (1994)], and there is no way to tell whether this linear combination is an independent vector or a linear combination of independent vectors. Third, the results are not invariant or robust with respect to the direction of normalization, that is, the choice of the variable on the left-hand side of the cointegrating regression. Dickey et al. (1991) argue that while the test is asymptotically invariant to the direction of normalization, the test results may be very sensitive to it in finite samples. Finally, there is a substantial finite sample bias [for example, Banerjee et al. (1986)], and there is also the problem of the implicit common factor restriction [for example, Kremers et al. (1992)]. Apart from that, two serious shortcomings of the residual-based test are that (i) the Dickey-Fuller test is based on a simple AR(1) representation, which means that the underlying model is misspecified in the sense that it should contain a moving average component; and (ii) the test is rather weak in distinguishing between unit root and near-unit root processes. In the late 1980s, the Johansen (1988) test for cointegration took the world of academic economics and finance by storm. This test quickly
became a crowd pleaser since it allowed anyone to prove anything they wanted to prove: all it takes to obtain the results that you want is a simple modification to the specification of the underlying model. The test is based on a vector autoregressive representation that allows the estimation of the cointegration matrix. Subsequently, two test statistics can be calculated to determine the number of significant cointegrating vectors: the maximal eigenvalue test (λmax) and the trace test (λtrace). The claim to fame of the Johansen test is that, unlike the Engle-Granger test, it produces results that are invariant with respect to the direction of normalization. This is because all variables are explicitly endogenous, so that there is no need to pick the left-hand side variable in an arbitrary manner. Another perceived advantage of this test is that it provides estimates of all of the cointegrating vectors that exist within a set of variables and offers test statistics for their number. It has also been put forward that (i) the Johansen test fully captures the underlying time series properties of the data; (ii) it is more discerning in its ability to reject a false null hypothesis; (iii) it is based on a fully specified statistical model; and (iv) it has the important advantage of using all of the information in the dataset, thereby increasing estimation efficiency. This seems to be a superb list of credentials for the Johansen test, but what about its shortcomings? I have often argued that this test is the biggest scandal in modern econometrics because it can, at the touch of a button, be used to prove the researcher's underlying beliefs. This is convenient because the majority of empirical research in economics and finance is directed at proving preconceived ideas and producing "good" results, rather than going on a quest for the truth. In this sense, the test is also dangerous, because it can be used to support faulty policy actions and financial decisions. Imagine that you want to prove that privatization is good under any circumstances to please a policymaker who believes that for ideological reasons, or that you fancy the proposition that international diversification pays off. Not a problem: you will get the results you want, thanks to the Johansen test. The Johansen test suffers from major problems. One important shortcoming is that it does not allow the identification of separate functional relations in a structural simultaneous equation model [for example, Moosa (1994)]. If, by applying the method to a set of variables, two cointegrating vectors are obtained, these vectors cannot be identified as specific structural equations. As a matter of fact, no one knows what the cointegrating vectors are: structural equations, reduced-form equations, or a combination thereof [Wickens (1996)]. Moreover, Reimers (1991) asserts that the test over-rejects the null hypothesis of non-cointegration when it is true, hence providing the ammunition for those wanting to prove preconceived ideas.2 And if that does not work, then all it takes to obtain the desired results is to change the specification of the underlying VAR, for example, by changing the lag structure. Last, but not least, the test

invariably produces implausible point estimates of the coefficients of the cointegrating vectors; hence you may get 178.6 for the estimate of a demand elasticity that is supposed to be around unity. Once any of the tests shows that cointegration is present, the corresponding dynamic relation should be represented by an error correction model, which combines short-term dynamics (as represented by first differences) and deviations from the long-run equilibrium relation (as represented by the error correction term). The error correction model corresponding to (1) is Δyt = Σ(i=1 to k) βi Δyt-i + Σ(i=0 to k) γi Δxt-i + λêt-1 + ut (4), where the coefficient on the error correction term measures the speed of adjustment towards the long-run relation, or the rate at which the deviation is eliminated. For a valid error correction model, the coefficient on the error correction term (λ) must be significantly negative. Granger's representation theorem [Engle and Granger (1987)] states that cointegration implies and is implied by a valid error correction model. With respect to equations (1) and (4), if εt ~ I(0), then λ should be significantly negative, and vice versa. This means that it is possible to test for cointegration via an error correction model, in which case the null of no cointegration is H0: λ = 0 against the alternative H1: λ < 0. More generally, Granger's representation theorem stipulates that cointegration implies the existence of a valid error correction model in a regression of y on x (as in equation 4), or vice versa. Causality testing was popularized by Granger (1969). While the test was initially based on a straight first difference model, the advent of cointegration analysis led to a rethink of causality testing. If the variables are cointegrated then the test should be based on an error correction model, because the first difference model would be misspecified. If this is the case, causality should be detected in at least one direction, from x to y or vice versa. The model used to test for causality in the presence of cointegration is Δyt = Σ(i=1 to k) βi Δyt-i + Σ(i=1 to k) γi Δxt-i + λêt-1 + ut (5), which is the same as equation (4) except that the contemporaneous term Δxt is deleted. This is because causality in economics and finance is not really causality (as it is in physics); it is effectively temporal ordering: something causes something else because the first something occurs before the other. Consequently, the results of causality testing mean nothing. Furthermore, for x to be judged to affect y, x must be exogenous, which is hardly the case in most applications (for example, purchasing power parity). Yet another problem is that the test results are sensitive to the choice of the lag structure (the value of k), which creates scope for manipulating the model to obtain the desired results.

2 Hjalmarsson and Österholm (2007) use Monte Carlo simulations to show that in a system with near-integrated variables, the probability of reaching an erroneous conclusion regarding the cointegrating rank of the system is generally substantially higher than the nominal size, which means that the risk of concluding that completely unrelated series are cointegrated is non-negligible.
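To make the mechanics of equations (1)-(5) concrete, the following sketch runs the two-step residual-based procedure, a simple error correction regression, and the Johansen test on simulated data; it is an illustration under assumed data and lag choices, not the computations reported in this paper.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
n = 200
x = np.cumsum(rng.normal(size=n))                  # x ~ I(1)
y = 2.0 + 0.8 * x + rng.normal(scale=0.5, size=n)  # y cointegrated with x by construction

# Step 1: cointegrating regression y_t = alpha + beta*x_t + eps_t (equation 1)
step1 = sm.OLS(y, sm.add_constant(x)).fit()
resid = step1.resid

# Step 2: ADF-type test on the residuals (equations 2-3); note that the proper
# critical values are the Engle-Granger ones, not the standard Dickey-Fuller values
print("ADF statistic on residuals:", round(adfuller(resid)[0], 2))

# Simplified error correction regression (in the spirit of equation 4, without extra lags):
# a significantly negative coefficient on the lagged residual supports cointegration
dy, dx = np.diff(y), np.diff(x)
ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, resid[:-1]]))).fit()
print("t-statistic on error correction term:", round(ecm.tvalues[2], 2))

# Johansen test on the pair (the lag choice matters, as discussed in the text)
jres = coint_johansen(np.column_stack([y, x]), det_order=0, k_ar_diff=4)
print("trace statistics:", np.round(jres.lr1, 2))
print("max-eigenvalue statistics:", np.round(jres.lr2, 2))
```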


Applications in economics and finance


Cointegration analysis has been used extensively in economics and finance. One of the applications in finance is an investment strategy known as pairs trading. The underlying idea is that the spread between (or the ratio of) two cointegrated stock prices may widen temporarily, which provides the opportunity to go short on one of the stocks and long on the other, then exiting the two positions when the spread is back where it should be, in other words, when the spread has gone through mean reversion. Hence what matters is cointegration, not correlation, because cointegration implies mean reversion. Schmidt (2008) uses the Johansen test to detect stock pairs for this strategy as applied to some stocks listed on the ASX200 (Australian Stock Exchange). Based on the results, it is stated that two cointegrated stocks can be combined in a certain linear combination so that the dynamics of the resulting portfolio are governed by a stationary process. Without experimenting with a trading rule, and based on plots of the residual series (showing a high rate of zero crossings and large deviations around the mean), she concludes that this strategy would likely be profitable. I would imagine that it is typical to find that those suggesting these investment strategies are not willing to bet their own money on the predictions of cointegration-based tests. Alexander (1999) explains how cointegration can be used for the purpose of hedging, arguing that hedging methodologies based on cointegrated financial assets may be more effective in the long term and that investment management strategies that are based only on the volatility and correlation of returns cannot guarantee long term performance. She suggests that since high correlation alone is not sufficient to ensure the long term performance of hedges, there is a need to augment standard risk-return modeling methodologies to take account of common long-term trends in prices, which is exactly what cointegration provides. But it has been demonstrated that accounting for cointegration makes no difference whatsoever to the estimation of the hedge ratio and hedging effectiveness. For example, it has been suggested by Lien (1996) that if the price of the unhedged position and that of the hedging instrument are cointegrated, the position will be under-hedged if the hedge ratio is estimated from a first difference model. This follows from Granger's representation theorem, which implies that if the prices are cointegrated the first difference model will be misspecified because it ignores cointegration. However, it has been found that using an error correction model to estimate the hedge ratio does not make the hedge more effective [Moosa (2011a)]. In other words, the theoretically sound results of Lien (1996) have negligible empirical significance or ramifications. Cointegration and causality have been used to investigate market interdependence and integration, inter alia, by Taylor and Tonks (1989), Mathur and Subrahmanyam (1990), Eun and Shin (1989), and Malliaris and Urrutia (1992). For example, Taylor and Tonks (1989) used cointegration analysis to examine the effect of the 1979 abolition of the U.K.

exchange controls on the degree of integration between the British market and other markets (Germany, the Netherlands, Japan, and the U.S.). The results show that U.K. stock prices became cointegrated with prices in other markets in the post-1979 period, which reduced the scope for diversification. Mathur and Subrahmanyam (1990) used causality testing to find out if the Nordic markets (of Denmark, Norway, Finland, and Sweden) are integrated with that of the U.S. The results reveal that the U.S. market affects the Danish market only and that the Norwegian, Danish, and Finnish markets do not affect any of the other markets (naturally, no explanation is suggested for the differences in the results).3 Conversely, there was much more uniformity in the results obtained by Eun and Shin (1989), who estimated a nine-market VAR. They detected a considerable amount of multilateral interaction and a significant effect of U.S. innovations on other markets. However, they also found that no single foreign market could adequately explain U.S. market movements. Finally, Malliaris and Urrutia (1992) investigated the lead-lag relations among six markets in diverse time zones for the period before, during, and after the October 1987 crash. They concluded that the crash was probably an international crisis and that it might have begun simultaneously in all national stock markets. The implication of these results is that international diversification will not work if long positions are taken on a group of integrated markets, as shown by the results of cointegration. In international finance, one of the most popular applications of cointegration has been the testing of international parity conditions, starting with the testing of purchasing power parity (PPP), which was initiated by Taylor (1988) and Enders (1988). The production of papers testing PPP by cointegration is yet to come to an end, but we are no better off with respect to our understanding of PPP. This hypothesis works well over very long periods of time and under hyperinflation. The use of cointegration analysis to test PPP, particularly with the Johansen test, gives a false indication that PPP works over a short period of time. But to any observer, exchange rates are too volatile to be explained by the smooth price movements over time. Cointegration has also been used to test covered and uncovered interest parity, CIP and UIP, respectively. In particular, I have been puzzled by attempts to use cointegration to test CIP, because this condition must hold by definition, as an arbitrage or a hedging condition [Moosa (2004)]. Strangely, cointegration tests may tell us that CIP does not work, as we are going to see. Cointegration analysis is also used for policy making. For example, Drehmann et al. (2010) use the Johansen test to derive results to design a

3 It is invariably the case that when several countries, currencies, or whatever are examined using cointegration analysis, the results turn out to be all over the place. Typically, the results would show that A and B are cointegrated but A and C are not, and no one knows why, because no one presents an explanation why.


program for countercyclical capital buffers, which has been introduced as part of the so-called Basel III provisions [Moosa (2011b)]. The Johansen test is also used by Ericsson et al. (1998) to estimate an econometric model for U.K. money, output, prices, and interest rates. This notorious test is used to judge exchange rate misalignment by estimating a cointegrating vector relating the exchange rate to its determining factors. It is this kind of work that has led to the conclusion that the Chinese yuan is undervalued against the dollar, which is the pretext for a potential full-scale trade war between the U.S. and China [Moosa (2011c, 2011d)]. We are talking about some serious business here, too serious to be sorted out on the basis of fragile and misleading cointegration tests.
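As a stylized illustration of the pairs trading logic discussed at the start of this section (not the strategy tested by Schmidt (2008)), the following sketch estimates a hedge ratio, forms the spread, and generates entry and exit signals from its z-score; the thresholds and price paths are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def pairs_trading_signals(p1, p2, entry=2.0, exit=0.5):
    """Cointegration-style pairs trading sketch: estimate the hedge ratio by OLS,
    form the spread, and trade on its z-score (open when the spread is wide,
    close when it reverts towards its mean)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    beta = sm.OLS(p1, sm.add_constant(p2)).fit().params[1]   # hedge ratio
    spread = p1 - beta * p2
    z = (spread - spread.mean()) / spread.std()
    signals, position = np.zeros_like(z), 0
    for t in range(len(z)):
        if position == 0 and abs(z[t]) > entry:
            position = -np.sign(z[t])        # short the expensive leg of the spread
        elif position != 0 and abs(z[t]) < exit:
            position = 0                     # spread has mean-reverted; close out
        signals[t] = position
    return beta, spread, signals

# Hypothetical cointegrated price paths for illustration only
rng = np.random.default_rng(7)
common = np.cumsum(rng.normal(size=500))
pa = 50 + common + rng.normal(scale=0.8, size=500)
pb = 30 + 0.9 * common + rng.normal(scale=0.8, size=500)
beta, spread, sig = pairs_trading_signals(pa, pb)
```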
[Figure 1, top panel: the highly correlated variables x1 and x2]

Correlation and cointegration


Applications of cointegration in economics and finance are based on two propositions pertaining to the difference between cointegration and correlation. The first proposition is that cointegration tells you a different story from that told by correlation, which is the basis of pairs trading and the hedging argument. The second is that cointegration enables us to distinguish a spurious relation between two highly correlated variables from a genuine one. Hence it is suggested that only the variables that are genuinely related exhibit cointegration. The problem is that no one has shown how any cointegration test can do this sort of forensic investigation. The empirical evidence does not support this contention either.

[Figure 1, bottom panel: the moderately correlated variables x3 and x4]
Figure 1 Cointegration and correlation

The proposition on the distinction between correlation and cointegration is not very well thought out. For example, Chan (2006) distinguishes between the two concepts by using the co-movements of two (theoretically constructed) stock prices. He argues that the two prices are correlated if they rise and fall in synchrony, whereas they are cointegrated if they do not wander off in opposite directions for very long without coming back to a mean distance eventually. By synchrony, it is meant that prices rise and fall together on a daily, weekly, or monthly basis. It follows, therefore, that the spread between two cointegrated prices is mean reverting while the spread between two perfectly correlated prices is constant. But this is misleading because synchrony implies perfect correlation, which can never be the case in practice. This also means that perfectly or highly correlated prices are necessarily cointegrated, because a constant spread is by definition stationary. Alexander (1999) argues that high correlation does not necessarily imply high cointegration in prices. She shows a graph of the German mark and Dutch guilder over the period 1975-85, arguing that they appear to be cointegrated. Then she adds a very small daily incremental return to the guilder, suggesting that the series are still highly correlated but not cointegrated. However, this addition of an incremental return affects both cointegration and correlation, to the extent that the more the two variables drift apart the less correlated they become. Does this also mean that variables that are negatively correlated cannot be cointegrated, as Chan (2006) explicitly argued ("wander off in opposite directions")? This is an issue that we will explore later. I will argue here that cointegration analysis is not capable of distinguishing between a spurious relation and a genuine (causal) one. For this purpose, I present two examples, one based on artificially generated time series (hence they can only be spuriously related, if at all) and another based on actual data on U.S. debt. Starting with the artificial data, four time series variables are generated over 51 points in time, ranging between t = 0 and t = 50.
[Chart: the five measures of U.S. indebtedness x1-x5, 1945-2010; source: Federal Reserve]
Figure 2 Correlation and cointegration of measures of U.S. indebtedness

The four time series (x1, x2, x3, and x4) are generated as follows: x1t = 10 + 1.2t + ε1t; x2t = 10 + 0.8t + ε2t; x3t = 10 + 4t + ε3t; x4t = 10 + θt·t + ε4t (6), where ε1, ε2, ε3, ε4, and θ are random variables falling in the ranges (-1,1), (-2,2), (-25,25), (-10,10), and (0,2), respectively. By design, x1 and x2 are highly correlated (correlation = 0.99) whereas x3 and x4 are not (correlation = 0.51). Figure 1 shows how the four variables move over time. Since they are not genuinely related by design, we should obtain a finding of no cointegration between x1 and x2 and between x3 and x4. However, that turned out not to be the case: x1 and x2 are cointegrated (ADF = -5.01) but x3 and x4 are not (ADF = -2.50). The same argument can be illustrated with the use of actual data on five variables representing measures of U.S. indebtedness over the period 1946-2010, as displayed in Figure 2: x1 is the liabilities of non-financial business, x2 is the liabilities of financial institutions, x3 is the U.S. liabilities to the rest of the world, x4 is total foreign assets, and x5 is GDP. Table 1 reports the correlations and the ADF statistics of the residuals of the cointegrating regressions for each possible pair of the five variables. While there is no exact correspondence between correlation and the finding of cointegration, the latter is found in three cases where correlation is at least 0.97. But why would the U.S. liabilities to the rest of the world have a genuine relation with GDP but not the liabilities of financial institutions and total foreign assets? In general, what makes the relations (x2, x3), (x3, x4), and (x3, x5) genuine whereas the other relations are not? Where is the theory or the intuition that explains these findings? A genuine relation must be formally or intuitively explainable, which is not the case in this example. Consequently, a finding of cointegration does not necessarily imply a genuine relation in the sense that the genuineness of the relation can be justified theoretically or intuitively.

Table 1 Correlation coefficients (top) and ADF statistics (bottom) for measures of U.S. indebtedness (* significant at the 5% level)

The distinction between cointegration and correlation as described by Chan (2006) and Alexander (1999) seems to suggest that two negatively correlated variables cannot be cointegrated because they do not move in synchrony, they drift apart from each other, and they move in opposite directions. Consider two time series variables, x'5t and x'1t, which are generated from the variable x1t of equation (6) as follows: x5t = -0.5x1t + ε5t; x'5t = 50 + x5t; x'1t = 50 + x1t (7), where ε5t is a random variable falling in the range (-2,2). x'5t and x'1t are negatively correlated by construction, which is obvious from Figure 3 (the correlation coefficient is -0.99). The two variables drift apart, hence they cannot be cointegrated. Yet, by running a cointegrating regression of x'1t on x'5t and applying the Dickey-Fuller test to the residuals of the regression, the results show that ADF = -5.59, which is statistically significant. Hence x'5t and x'1t, two variables that drift apart from each other, are cointegrated. It is interesting, however, that the logs of these variables are not cointegrated (ADF = -2.81).4

4 The use of logs in cointegration analysis is rather arbitrary. A typical justification for the use of logs is that the first log difference is more likely to be stationary than the first absolute difference. Hence logs are used for convenience, not for a theoretical or empirical reason. The problem is that the use or otherwise of logs may introduce significant qualitative differences in the results, as we are going to see later.
5 The source of the data is the IMF's International Financial Statistics.

An illustration: stock market integration

The idea behind testing for cointegration between the stock prices of
two countries is simple. If cointegration is established, then the stock markets of the two countries are said to be integrated, which means that international diversification involving these two markets is not effective in the sense that it does not reduce risk. Two flaws can be detected in this line of reasoning. First, the underlying assumption is that stock prices are positively correlated, which means that taking long positions on the two markets will not produce risk reduction. However, even if this is the case, diversification can be implemented by taking a short position on one market and a long position on the other. The second flaw is that the effectiveness of hedging depends on correlation, not on cointegration, which is contrary to the argument put forward by Alexander (1999). This illustration is based on quarterly stock price data from a diverse set of countries over the period 2001-2010.5 These countries are the U.S., Japan (JP), the U.K., Canada (CA), Australia (AU), New Zealand (NZ), Switzerland (SW), Singapore (SN), Korea (KO), India (IN), South Africa (SA), Brazil (BR), Turkey (TR), and Kuwait (KW). We test the relation between each of these markets and that of the U.S. by subjecting the stock price time series to the following tests: the residual-based test (ADF), the Johansen test with two different lag lengths, the test based on the error correction term, and causality testing from the U.S. market to each other market, and vice versa. Following normal practice, logarithmic prices are used. The results are reported in Table 2. Judged by the ADF statistic and the Johansen test (with four lags), none of the markets is integrated with that of the U.S., a result that may be disappointing for those who want to show that markets should be integrated


Market | ADF | Johansen (4 lags) λmax | Johansen (4 lags) λtrace | Johansen (12 lags) λmax | Johansen (12 lags) λtrace | Error correction t | US → XX [χ2(4)] | XX → US [χ2(4)]
JP | -1.36 | 13.60 | 14.68 | 73.25* | 79.15* | -0.25 | 6.49 | 3.69
UK | -2.72 | 10.6 | 14.93 | 57.01* | 15.44* | -2.08* | 1.51 | 1.61
AU | -1.47 | 7.14 | 8.21 | 104.20* | 157.62* | -1.09 | 3.21 | 5.18
CA | -2.11 | 9.63 | 11.01 | 117.22* | 120.36* | -0.36 | 2.20 | 6.63
NZ | -2.69 | 10.38 | 14.79 | 136.41* | 141.20* | -2.15* | 7.93 | 0.89
SW | -2.93 | 12.28 | 15.27 | 161.67* | 172* | -1.14 | 5.49 | 3.89
SN | -0.85 | 8.04 | 8.15 | 133.59* | 141.40* | 0.39 | 5.23 | 22.98*
KO | -0.49 | 4.94 | 5.72 | 97.75* | 97.57* | -1.42 | 3.42 | 4.91
IN | -0.82 | 6.85 | 7.03 | 80.63* | 130.88* | -0.56 | 0.27 | 3.02
SA | -2.12 | 13.64 | 16.57 | 58.26* | 66.61* | -6.36* | 17.55* | 1.60
BR | -0.91 | 7.58 | 8.07 | 91.58* | 92.10* | -0.64 | 1.81 | 4.51
TR | -0.93 | 8.62 | 8.87 | 74.04* | 75.15* | -0.04 | 0.86 | 0.97
KW | -2.86 | 11.63 | 18.05* | 44.16* | 44.17* | -3.17* | 4.02 | 7.30

* Significant at the 5% level. The 5% critical values are: ADF (-3.52), λmax (14.88), λtrace (17.86), error correction (-2), χ2(4) (9.48).

Table 2 Testing the hypothesis of market integration


But this is not a problem. Just change the lag length in the magical Johansen procedure to 12, and all of the markets become integrated. The test based on the error correction term shows that only four markets are integrated with that of the U.S.: the U.K., New Zealand, South Africa, and Kuwait. So, what are we supposed to believe? The tendency would be to report the results that support a preconceived idea. If I accepted the proposition that cointegration has implications for the benefits or otherwise of international diversification, and if I wanted to prove a preconceived idea, I would do the following. I would report the results of the ADF test if I thought that there was scope for international diversification, but I would report the results of the Johansen test with 12 lags if I thought that international diversification was not effective. If I held the view of sometimes/sometimes not and perhaps/perhaps not, I would report the results based on the error correction test. But then how would I explain the finding that the markets of Kuwait,6 South Africa, and New Zealand are integrated with the U.S. but those of Japan, Canada, and Singapore are not? It is simply embarrassing and even hazardous to derive inference from any set of these results.

6 It is interesting to note that rising (and falling) oil prices should lead the Kuwait and U.S. markets to drift away from each other.

Figure 3 Cointegration of negatively-correlated variables (x'1 and x'5; source: Federal Reserve)

Checking the validity of some predictions of the theory of cointegration

Examining the results reported in Table 2 in the light of the theory's predictions produces interesting results. Specifically, we want to examine the validity of the propositions that (i) cointegration implies, and is implied by, the validity of the corresponding error correction model; and (ii) cointegration implies that causality should run in at least one direction. For this purpose we use the ADF test results, since the error correction terms are taken to be the lagged residuals of the bivariate cointegrating regressions. The results tell us that there is no cointegration in any case, but the coefficient on the error correction term is significant in four cases, when Granger's representation theorem stipulates that it should be significant in none. The cointegrating regressions were run both ways, but that did not change the results. As far as causality is concerned, the results show that Singapore stock prices, which are not cointegrated with U.S. stock prices, are causally related to them, which is fine, because the absence of cointegration does not necessarily preclude causation. The problem is how to explain why, out of all of these markets, only the Singapore market has an effect on the U.S. market. The only other case of causality is that the U.S. market has an effect on that of South Africa but not on any of the others (again, why South Africa?). Out of the four cases that show cointegration, the only case that produces unidirectional causality is that of South Africa. So much for the implications of cointegration for causality. Now we examine the practical significance of these results in terms of the benefits of international diversification. For this purpose we construct portfolios consisting of a position on the U.S. market and an opposite position (of the same size) on one of the other markets. If the hedge is effective, then the variance of the rate of return on U.S. stocks should be significantly higher than the variance of the rate of return on the portfolio. For this purpose we conduct a variance ratio test, which requires the calculation of the variance ratio VR = σ²(RUS)/σ²(RP) (8), where σ²(RUS) is the variance of the rate of return on U.S. stocks (the first log difference of the stock price index) and σ²(RP) is the variance of the rate of return

on the portfolio. For simplicity, we assume an equally-weighted portfolio, hence RP = 0.5(RUS − RXX) = 0.5(ΔpUS − ΔpXX) (9), where Δp denotes the first difference of the log price index and RXX is the rate of return on the stocks of the other country.7 Hedging is effective (that is, international diversification is useful) if VR > F(n−1, n−1) (10), where n is the sample size. The VR test can be supplemented by a measure of variance reduction (VD), which is calculated as VD = 1 − 1/VR (11). The results are presented in Table 3, which reports the variance ratio, the variance reduction, and the correlation between U.S. returns and returns on the stock market of the other country. We can see that hedging is effective (international diversification is useful) in two cases only: the U.K. and Canada, one of which shows cointegration while the other does not, according to the error correction test. Why these two markets? Because they exhibit the highest correlation with returns on the U.S. market. Consequently, a simple concept like correlation leads to the right inference, whereas the sophisticated tests of cointegration produce messy results that may lead to faulty financial decisions. The claim that what matters for hedging is cointegration, not correlation, is unfounded.

Market | VR | VD (%) | Correlation
JP | 1.11 | 10.15 | 0.74
UK# | 6.30* | 84.13 | 0.92
CA | 3.28* | 69.53 | 0.89
AU | 1.52 | 34.39 | 0.73
NZ# | 2.45 | 59.22 | 0.77
SW | 1.33 | 24.54 | 0.76
SN | 0.74 | -34.58 | 0.76
KO | 1.00 | 0.05 | 0.78
IN | 0.38 | -160.28 | 0.66
SA# | 0.42 | -137.65 | 0.49
BR | 0.69 | -44.58 | 0.82
TR | 0.47 | -111.73 | 0.75
KW# | 0.36 | -177.87 | 0.56

# Markets cointegrated with the U.S. * Significant at the 5% level (critical value is 2.68).

Table 3 Results of the variance ratio test
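The arithmetic behind equations (8)-(11) is straightforward and can be sketched as follows. The return series in the example are simulated placeholders rather than the actual quarterly data, and scipy is used only to obtain the F critical value; the sketch is illustrative, not the calculation used to produce Table 3.

    # Sketch of the variance ratio test in equations (8)-(11).
    import numpy as np
    from scipy.stats import f

    def variance_ratio_test(r_us, r_xx, alpha=0.05):
        r_p = 0.5 * (r_us - r_xx)                        # equation (9): long U.S., short XX
        vr = np.var(r_us, ddof=1) / np.var(r_p, ddof=1)  # equation (8)
        vd = 1.0 - 1.0 / vr                              # equation (11): variance reduction
        n = len(r_us)
        crit = f.ppf(1 - alpha, n - 1, n - 1)            # equation (10): F(n-1, n-1) cut-off
        return vr, vd, vr > crit

    # Example with simulated quarterly returns (placeholders for the actual data)
    rng = np.random.default_rng(1)
    r_us = rng.normal(0.01, 0.08, 40)
    r_xx = 0.9 * r_us + rng.normal(0.0, 0.03, 40)        # a highly correlated market
    print(variance_ratio_test(r_us, r_xx))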

Another illustration: international parity conditions


In this section we illustrate how misleading cointegration analysis can be by testing two international parity conditions: purchasing power parity (PPP) and covered interest parity (CIP). For this purpose, we employ the residual-based test and the error correction test, but not the Johansen test, because we have established that it is unreliable (actually, highly reliable if you want to produce desirable results). PPP is tested using two specifications: the restricted specification, in which the exchange rate is a function of the price ratio, and the unrestricted specification, in which the exchange rate is a function of the price indices in the two countries. The two specifications are written in logarithmic form respectively as st = α0 + α1(pXX,t − pUS,t) + εt (12) and st = β0 + β1pXX,t + β2pUS,t + εt (13), where s is the (log of the) exchange rate, pUS is the (log of the) price level in the U.S., and pXX is the (log of the) price level in the other country.8 The results of testing PPP for 13 countries against the U.S., using quarterly data covering the period 2001Q1-2010Q3, are reported in Table 4. The results show that when PPP is tested by applying the Dickey-Fuller test to the residuals of the cointegrating regression, evidence for cointegration is apparent only between the U.S. and South Africa, and only when the unrestricted specification of the model is used. What is so special about South Africa that it should be the only country for which PPP is valid against the U.S.? When the error correction test is used, most cases exhibit cointegration, which casts doubt on the validity of Granger's representation theorem. But then in some cases cointegration is present when the restricted specification is used but not when the unrestricted specification is used, which is a contradiction.
Country | Restricted (ADF) | Unrestricted (ADF) | Restricted (EC) | Unrestricted (EC)
JP | -1.43 | -1.90 | -1.36 | -1.54
UK | -2.45 | -2.81 | -2.36* | -2.39*
CA | -2.23 | -2.17 | -1.90 | -1.38
AU | -2.53 | -2.78 | -2.25* | -2.07*
NZ | -2.06 | -2.58 | -2.11* | -2.06*
SW | -2.71 | -2.99 | -2.61* | -2.95*
SN | -0.10 | -3.64 | -0.67 | 0.00
KO | -1.76 | -2.13 | -2.07* | -0.97
IN | -1.83 | -2.09 | -2.89* | -1.93
SA | -2.32 | -4.61* | -2.92* | -3.93*
BR | -1.94 | -2.89 | -5.04* | -3.42*
TR | -2.59 | -2.53 | -3.77* | -3.51*
KW | -2.74 | -2.81 | -2.79* | -2.57*

* Significant at the 5% level (the critical value of the ADF test is -3.52 for the restricted specification and -4.01 for the unrestricted specification; the critical value for the EC test is -2).

Table 4 Testing PPP

If PPP is valid, as shown by testing the restricted specification, then the restriction (resulting from the imposition of the condition of symmetry) must be valid. If this is so, then the corresponding unrestricted version must also be valid, but this is not so in all cases.

7 Moosa and Al-Deehani (2009) show how to calculate the portfolio weights that minimize the variance of the rate of return on the portfolio.

8 The use of the log-linear specification is appropriate in this case because the raw specification of PPP is that the exchange rate is a function of the price ratio. Hence the specification is justified by theoretical reasoning.
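For concreteness, the residual-based PPP tests behind Table 4 can be sketched as below. The series s, p_xx, and p_us are assumed to be the logarithms of the exchange rate and the two price indices, and the ADF test is applied with its default settings, so the resulting statistic is only indicative and not directly comparable to the critical values quoted under the table.

    # Sketch of the restricted (12) and unrestricted (13) cointegrating regressions for PPP.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller

    def ppp_adf(s, p_xx, p_us):
        # Restricted: s_t = a0 + a1 (p_xx,t - p_us,t) + e_t
        res_r = sm.OLS(s, sm.add_constant(p_xx - p_us)).fit().resid
        # Unrestricted: s_t = b0 + b1 p_xx,t + b2 p_us,t + e_t
        res_u = sm.OLS(s, sm.add_constant(np.column_stack([p_xx, p_us]))).fit().resid
        # ADF statistics on the residuals of the two cointegrating regressions
        return adfuller(res_r)[0], adfuller(res_u)[0]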


Country | ADF | EC
JP | -1.96 | -3.15*
UK | -2.34 | -1.76
CA | -2.47 | -2.58*
AU | -2.14 | -1.65
NZ | -2.86 | -2.09*
SW | -2.38 | -1.15
SN | -2.02 | -0.58
KO | -3.27 | -1.25
IN | -2.80 | -1.69
SA | -3.17 | -1.60
BR | -1.48 | -1.88
TR | -3.24 | -2.51*
KW | -2.07 | -0.10

* Significant at the 5% level (critical value is -1.96)

Table 5 Testing CIP

Covered interest parity is a rather peculiar case. One way to test CIP is to find out whether the spot and forward exchange rates are cointegrated. However, CIP must hold as a truism, because a bank will never quote a forward rate that is different from the one compatible with CIP, the so-called interest parity forward rate. This is because it is the only rate that precludes the possibility of profitable risk-free arbitrage (hence CIP is a no-arbitrage condition) and it is the only rate that enables the quoting bank to hedge its exposure perfectly (hence CIP is a hedging condition).9 The interest parity forward rate is calculated as Ft = St[(1 + iXX)/(1 + iUS)] (14), where F is the forward rate, S is the spot rate, iXX is the interest rate in the other country, and iUS is the interest rate in the U.S. Although CIP must hold by definition and design, cointegration tests may reveal that it does not hold. Consider the cointegrating regression ft = α + βst + εt (15), where lowercase letters denote logarithms. Cointegration between ft and st requires that εt ~ I(0). The results, presented in Table 5, show that cointegration is present in four cases only when the error correction test is used and in no case when the ADF test is used. A plot of the spot versus forward rates shows similarly close relations irrespective of whether or not cointegration is found. The spot and forward rates do not drift apart from each other, which makes one wonder why they are not cointegrated. But irrespective of the results of cointegration testing, CIP must always hold in the sense that a bank can only quote the forward rate implied by equation (14). There cannot be any deviations from this condition because a deviation implies the availability of riskless arbitrage profit. Cointegration results have no meaning whatsoever in the case of CIP, and the hundreds of tests that have been done to find out whether or not CIP holds are a total waste of time.

9 See Moosa (2004) for a distinction between CIP as an arbitrage condition and a hedging condition.
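The mechanics of equations (14) and (15) can be illustrated with a short sketch. The interest rates are assumed to be per-period rates matching the maturity of the forward contract, and the ADF test is applied with its default settings; the sketch is only an illustration of the test logic discussed above.

    # Sketch of the interest parity forward rate (14) and the cointegrating regression (15).
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller

    def interest_parity_forward(spot, i_xx, i_us):
        return spot * (1.0 + i_xx) / (1.0 + i_us)        # equation (14)

    def cip_residual_adf(forward, spot):
        f_log, s_log = np.log(forward), np.log(spot)
        resid = sm.OLS(f_log, sm.add_constant(s_log)).fit().resid   # equation (15)
        return adfuller(resid)[0]                         # ADF statistic of the residuals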

Conclusion

It seems that the cointegration revolution was not a revolution at all. It was another econometric trick that does not help our understanding of how the economy and the financial system work. On the contrary, cointegration analysis may produce results that are highly misleading (for example, that CIP does not hold, hence free money is available for no risk). One can only wonder why this gimmick is considered as important for our lives as the discovery of penicillin (at least in the eyes of the Nobel Prize Committee). The problems associated with cointegration analysis are plentiful. To start with, the results obtained by using different tests typically vary considerably, and they are not robust with respect to model specification (for example, linear versus log-linear, restricted versus unrestricted, changing the lag structure, changing the direction of normalization, and the addition or deletion of a time trend). Hence, the technique offers the opportunity for anyone to prove whatever they like. This can be very dangerous if the results are used for policy formulation or for financial decisions. The claim to fame of cointegration analysis, namely that it can distinguish spurious relations from genuine ones, is false. We can only do that by common sense, theory, and/or intuition. Furthermore, some of the pillars of cointegration analysis are not supported by the results presented in this study: cointegration does not necessarily imply, and is not necessarily implied by, a valid error correction representation, and it does not necessarily imply that causality must be present in at least one direction. In some cases simple correlation analysis does a better job than cointegration testing. Cointegration analysis, like many of the techniques of financial econometrics, is not worthy of the brain power spent on its development. While it has provided the means for finance and economics academics to publish papers and for students in the two disciplines to obtain their PhDs, the technique has not provided any useful insights. On the contrary, it typically provides faulty, inconsistent, and non-robust results that may be hazardous to use. I would certainly advocate the use of a warning phrase such as "handle with care" to describe results based on cointegration testing. It is yet another manifestation of the failure of financial econometrics.

References

Alexander, C., 1999, Optimal hedging using cointegration, Philosophical Transactions of the Royal Society, Series A 357, 2039-2085
Banerjee, A., J. J. Dolado, D. F. Hendry, and G. W. Smith, 1986, Exploring equilibrium relationships in econometrics through static models: some Monte Carlo evidence, Oxford Bulletin of Economics and Statistics, 48, 253-77
Chan, E., 2006, Cointegration is not the same as correlation, TradingMarkets.Com, 13 November
Dickey, D. A., D. W. Jansen, and D. L. Thornton, 1991, A primer on cointegration with an application to money and income, Federal Reserve Bank of St. Louis Economic Review, March/April, 58-78
Drehmann, M., C. Borio, L. Gambacorta, G. Jimenez, and C. Trucharte, 2010, Countercyclical capital buffers: exploring options, BIS Working Papers, No 317
Enders, W., 1988, ARIMA and cointegration tests of PPP under fixed and flexible exchange rate regimes, Review of Economics and Statistics, 70, 504-508
Engle, R. F. and C. W. J. Granger, 1987, Cointegration and error correction: representation, estimation and testing, Econometrica, 55, 251-76
Engle, R. F. and C. W. J. Granger (eds.), 1991, Long-run economic relationships: readings in cointegration, Oxford University Press
Engle, R. F. and B. S. Yoo, 1987, Forecasting and testing in cointegrated systems, Journal of Econometrics, 35, 143-59
Engle, R. F. and B. S. Yoo, 1991, Cointegrated economic time series: an overview with new results, in Engle, R. F. and C. W. J. Granger (eds.), 1991, Long-run economic relationships: readings in cointegration, Oxford University Press
Ericsson, N. R., D. F. Hendry, and G. E. Mizon, 1998, Exogeneity, cointegration and economic policy analysis, Federal Reserve System, International Finance Discussion Papers, No 616, June
Eun, C. S. and S. Shin, 1989, International transmission of stock market movements, Journal of Financial and Quantitative Analysis, 24, 41-56
Gonzalo, J., 1994, Comparison of five alternative methods of estimating long-run equilibrium relationships, Journal of Econometrics, 60, 203-33
Granger, C. W. J., 1969, Investigating causal relations by econometric models and cross-spectral methods, Econometrica, 37, 424-438
Hjalmarsson, E. and P. Österholm, 2007, Testing for cointegration using the Johansen methodology when variables are near-integrated, IMF Working Papers, June
Johansen, S., 1988, Statistical analysis of cointegrating vectors, Journal of Economic Dynamics and Control, 12, 231-54

Kremers, J. J. M., N. R. Ericsson, and J. J. Dolado, 1992, The power of cointegration tests, Oxford Bulletin of Economics and Statistics, 54, 325-48
Lien, D., 1996, The effect of cointegration relationship on futures hedging: a note, Journal of Futures Markets, 16:7, 773-780
MacKinnon, J. G., 1991, Critical values for cointegration tests, in Engle, R. F., and C. W. J. Granger (eds.), 1991, Long-run economic relationships: readings in cointegration, Oxford University Press
Malliaris, A. G., and J. L. Urrutia, 1992, The international crash of October 1987: causality tests, Journal of Financial and Quantitative Analysis, 27, 353-364
Mathur, T., and V. Subrahmanyam, 1990, Interdependencies among the Nordic and US stock markets, Scandinavian Journal of Economics, 92, 587-597
Moosa, I. A., 1994, The monetary model of exchange rates revisited, Applied Financial Economics, 4, 279-87
Moosa, I. A., 2004, Is covered interest parity an arbitrage or a hedging condition? Economia Internazionale, 57, 189-194
Moosa, I. A., 2011a, The failure of financial econometrics: estimation of the hedge ratio as an illustration, Journal of Financial Transformation, 31, 67-72
Moosa, I. A., 2011b, Basel II to Basel III: a great leap forward? in La Brosse, J. R., R. Olivares-Caminal, and D. Singh (eds), Managing risk in the financial system, Edward Elgar
Moosa, I. A., 2011c, Undermining the case for a trade war between the U.S. and China, Economia Internazionale, forthcoming
Moosa, I. A., 2011d, On the U.S.-Chinese trade dispute, Journal of Post Keynesian Economics, forthcoming
Moosa, I. A. and T. Al-Deehani, 2009, The myth of international diversification, Economia Internazionale, 62, 83-406
Reimers, H. E., 1991, Comparison of tests for multivariate cointegration, Christian-Albrechts University, Discussion Paper No. 58
Schmidt, A. D., 2008, Pairs trading: a cointegration approach, Honours Thesis, University of Sydney
Taylor, M. P., 1988, An empirical examination of long-run purchasing power parity using cointegration techniques, Applied Economics, 20, 1369-1381
Taylor, M. P. and I. Tonks, 1989, The internationalisation of stock markets and the abolition of UK exchange control, Review of Economics and Statistics, 71, 332-336
Wickens, M. R., 1996, Interpreting cointegrating vectors and common stochastic trends, Journal of Econometrics, 74, 255-271


PART 2

A General Structural Approach For Credit Modeling Under Stochastic Volatility


Marcos Escobar, Associate Professor, Ryerson University
Tim Friederich, PhD candidate, HVB Institute for Mathematical Finance, Technische Universität München, and Financial Engineer, Risklab GmbH
Luis Seco, Director, RiskLab, and Professor, University of Toronto
Rudi Zagst, Director, HVB Institute for Mathematical Finance, and Professor, Technische Universität München

Abstract
This paper assumes a structural credit model with underlying stochastic volatility, combining the Black/Cox approach with the Heston model. We model the equity of a company as a barrier call option on its assets. The assets are assumed to follow a stochastic volatility process; this implies an equity model that incorporates most documented stylized facts. We derive the price of this option under a general framework in which the barrier and strike differ from each other, allowing for richer financial applications. The expression for the probability of default under this framework is also provided. As the calibration of this model becomes much more complex, we present an iterative fitting algorithm with which we are able to estimate the parameters of the model accurately, and we show via simulation the consistency of the estimator. We also study the sensitivity of the model parameters to the difference between the barrier and the strike price.

When we aim to characterize the performance of a stock, the stock price is mainly the result of the behavior of the company's assets and liabilities. Yet, the evolution of assets and liabilities is usually not reported on a daily basis. For this reason, we model a company's equity as a barrier call option on the assets, with the liabilities as barrier and strike price. The value of the equity is therefore given by the price of the down-and-out call option [Escobar et al. (2010)]. In this manner, we can calculate the asset price time series from the given equity price time series by inverting the option pricing formula; in particular, the asset price volatility becomes the volatility implied by the option price. This model is mainly based on the foundations of structural credit models laid by Merton (1973) and Black and Cox (1976). We combine this interpretation of the equity price as a call option with the Heston model [see Heston (1993)] by modeling the assets of the company as a stochastic volatility process. We also derive estimators for the calibration of the model, inspired by the work of Genon-Catalot et al. (2000). In Escobar et al. (2010), we assumed the knock-out barrier and the strike price to be equal and found the option pricing formula to be of a very simple form with a straightforward inverse formula, which allows us to calculate the asset price if the value of the option is known. In this paper, we no longer make this assumption and derive the price of the equity in the form of a general barrier call option. A strike price D + K with K > 0 can be interpreted economically in various ways. For example, it can represent additional costs of fulfilling the option contract at maturity; these costs are not involved when selling the option before maturity. Such costs are likely for over-the-counter (OTC) options and common for customized options. As a practical example, for OTC options on commodities or other physical goods, the transportation or storage costs incurred when the option is exercised might affect the price someone is willing to pay for such an option. An alternative explanation of D and K is to assume that the total liabilities consist of the debt D and an additional debt K granted on the cash account. The credit limit on the cash account does not require any backing by assets but is granted for the liquidity of daily business cash flows on the company's cash account. However, the company only defaults if the assets fall below D. In other words, the debt on the cash account is not considered for determining the case of a default, only the true debt D is. Another interpretation comes from taking D + K1 as the actual debt, with K1 < K, so that K1 and T are the maximum amount and time that equity holders (EH) and debt holders (DH) are willing to wait to avoid bankruptcy. Default would mean a total loss for EH as they are the first to be left out, so allowing the asset to wander in [D, D + K1] is beneficial to them. On the other hand, DH are usually willing to accept this risk for some extra return K − K1 on the money lent.

The model presented for the equity not only captures more of the stylized facts described in the literature [see Cont (2001)], such as stochastic volatility and correlation between the volatility and the equity, but also allows for estimation, which is a widely underestimated problem in the literature. A key challenge in calibrating the model parameters is that the parameters are needed in two steps: for the calculation of the inverse of the option price, and for fitting the parameters to the asset process. This is a very common estimation problem in finance, as it appears, for example, when estimating the parameters of the intensity in a reduced-form model using credit prices, the parameters of the instantaneous interest rate from bond prices, or simply the implied volatility from option prices. In this paper, we present an iterative fitting method which dramatically decreases the computation time and makes the more complex setting of this paper manageable. We furthermore conduct tests to examine numerically the quality of the fitting and, with a case study, examine the sensitivities of the model parameters to the difference between the barrier and the strike price which we allow for in this paper.

The stochastic volatility model


As asset returns are, in general, not normally distributed, applying a Black/Scholes model that assumes geometric Brownian motion for asset prices is a significant simplification. In particular, in falling stock markets, Engle (1982) and Heston (1993) find that volatility increases. To account for this so-called heteroscedasticity, we use a model incorporating stochastic volatility. Let (Ω, F, F, Q) be a filtered probability space with filtration F = {Ft}t>0. The underlying asset process A and the variance v of that process are expressed through the following SDEs:

dA(t) = μ A(t) dt + √v(t) A(t) dZ(t)   (1)
dv(t) = κ_v [v_∞ − v(t)] dt + ε_v √v(t) dZ_v(t)   (2)

where A is the underlying asset process, v is the variance of the asset process (so that the asset volatility is σ_A = √v), μ is the drift of the assets, Z and Z_v are two independent Wiener processes in the probability space (Ω, F, Q), v_∞ is the long-term value of the variance, ε_v is the volatility of the variance process, and κ_v is the mean-reversion speed. The parameters κ_v, v_∞, and ε_v have to fulfill the condition 2κ_v v_∞ > ε_v². This condition, which is also called the Feller condition [Feller (1951)], guarantees that the variance process stays strictly positive. Furthermore, requiring κ_v > 0 ensures strict stationarity and α-mixing [see Genon-Catalot et al. (2000)]. The debt is assumed to grow exponentially at the risk-free rate r: D(t) = D(0)e^{rt} = D(T)e^{−r(T−t)}. The model defined by (1) and (2) is a so-called stochastic volatility model and is the same model setup we used in Escobar et al. (2010). For a discussion of this model, we refer the interested reader to Heston (1993). Because it was introduced by this author, the model is also referred to as the Heston model.
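Since the dynamics (1)-(2) are only stated abstractly here, a minimal discretization may help fix ideas. The sketch below uses a simple Euler scheme with full truncation of the variance, which is a numerical choice of this sketch rather than anything prescribed in the paper; the default values follow the Figure 1 parameterization with r = 0.

    # Minimal Euler simulation of the asset/variance dynamics (1)-(2).
    import numpy as np

    def simulate_heston(a0=100.0, v0=0.01, mu=0.0, v_inf=0.01, kappa_v=0.75, eps_v=0.01,
                        T=5.0, n_steps=5 * 252, seed=0):
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        a, v = np.empty(n_steps + 1), np.empty(n_steps + 1)
        a[0], v[0] = a0, v0
        for i in range(n_steps):
            vp = max(v[i], 0.0)                              # full truncation of the variance
            dz, dz_v = rng.normal(0.0, np.sqrt(dt), 2)       # independent Wiener increments
            a[i + 1] = a[i] + mu * a[i] * dt + np.sqrt(vp) * a[i] * dz
            v[i + 1] = v[i] + kappa_v * (v_inf - v[i]) * dt + eps_v * np.sqrt(vp) * dz_v
        return a, v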


Pricing of barrier options


In the structural credit model, the value of the company's equity is interpreted as a call option on its assets with its liabilities as strike price. As the company can default over time, a barrier is introduced: if the value of the assets falls below the barrier at any point in time, the option expires worthless. As opposed to Escobar et al. (2010), where both the barrier and the strike price of the option were assumed to be equal, in the more general setting examined in this study they can assume different values. Figure 1 displays three possible paths with the following parameterization: A(0) = 100, r = 0 (for simplicity), v(0) = v_∞ = 0.01, κ_v = 0.75, ε_v = 0.01. As an example, consider a down-and-out call option with barrier D = 80 (constant due to r = 0) and strike D + K = 90. Path 1 never falls below either the strike or the barrier and has a terminal value of 125; thus, the payoff at maturity T = 5 is 125 − 90 = 35. Path 2 falls below the strike price but not below the barrier. This does not cause the option to expire worthless; as the terminal value lies above the strike price again, the payoff at maturity amounts to 104 − 90 = 14. Path 3 breaks the barrier after two and a half years. Consequently, the option is knocked out from then on and has a zero payoff, even though the underlying recovers and has a terminal value above the strike. The aim is to price a barrier option with strike D(T) + K on the assets A: C(t, A) = E_Q[e^{−r(T−t)} max{A(T) − D(T) − K, 0} 1{τ > T} | F_t] (3), where C(t, A) is the price of the barrier option with underlying A at time t and τ is the time of default, i.e., the first time the asset process A crosses (or reaches) the barrier D. It is modeled as a stopping time on the interval (t, T]: τ = inf{t ∈ (t, T] : A(t) ≤ D(t)} (4).
Figure 1 Simulated paths in the stochastic volatility model (asset price versus time in years, with three sample paths and the strike and barrier levels).
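The payoff definition in (3)-(4) can be sanity-checked by brute-force Monte Carlo on the simulated paths from the sketch above. This is only an illustrative approximation under the Figure 1 parameterization, not the closed-form result derived below; it relies on the simulate_heston sketch and on default path and step counts chosen purely for illustration.

    # Crude Monte Carlo approximation of the down-and-out call in equation (3).
    import numpy as np

    def mc_barrier_call(n_paths=2000, D=80.0, K=10.0, r=0.0, T=5.0, **heston_kwargs):
        payoffs = np.empty(n_paths)
        for j in range(n_paths):
            a, _ = simulate_heston(T=T, seed=j, **heston_kwargs)
            knocked_out = np.any(a <= D)                 # barrier is constant here because r = 0
            payoffs[j] = 0.0 if knocked_out else max(a[-1] - (D + K), 0.0)
        return np.exp(-r * T) * payoffs.mean()

    print(mc_barrier_call())                             # Figure 1 parameters by default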

The symbol E_Q denotes the expected value under the arbitrage-free measure Q. Equation (3) describes a down-and-out call option with underlying A, strike D(T) + K, and knock-out barrier D.

Whereas for K = 0 the value of the option can be shown to fulfill the straightforward formula C(t, A) = [A(t) − D(t)] 1{τ > t} using optional sampling theory, for the general case K ≠ 0 the evaluation of the option pricing formula (3) is inspired by the approach of Sepp (2006).1

Proposition 1: The price of the down-and-out call option (3) is C(t, A) = C(t, A, v) = e^{x+a} G(T − t, x − a, v) D(t) (5), where a = ln[(D(T) + K)/D(T)], x = ln[A(t)/D(t)], and G(T − t, x − a, v) can be evaluated via

G(T − t, x − a, v) = e^{x−a} − e^{−(x+a)} − (1/π) ∫₀^∞ e^{α(T−t,k) + β(T−t,k)v} [cos((x−a)k) − cos((x+a)k)]/(k² + ¼) dk,

with
α(T − t, k) = −(κ_v v_∞/ε_v²) [ψ(T − t) + 2 ln((ζ − κ_v + ψ e^{−ζ(T−t)})/(2ζ))],
β(T − t, k) = −(k² + ¼)(1 − e^{−ζ(T−t)})/(ζ − κ_v + ψ e^{−ζ(T−t)}),
ψ = κ_v + ζ,
ζ = [κ_v² + ε_v²(k² + ¼)]^{1/2}.

1 Sepp (2006) prices an option on the equity (which he denotes as S), which itself is modeled as an option on the assets A. This so-called compound option has similar characteristics to (3).

Note that solving (3) for K = 0 as a special case also results in C(t, A) = [A(t) − D(t)] 1{τ > t}. The model implied for the equity by (5), which is an explicit function not only of the assets but also of the volatility at time t, is quite rich. As a direct result of Ito's lemma, the presence of stochastic volatility in the equity can be shown. Moreover, due to (5) being a function of v(t), a correlation between the equity and the equity's volatility can be observed; this correlation could even be stochastic. For example, if we denote the log-equity as R = log C, then R follows dynamics of the form dR = (…)dt + (∂R/∂lnA)√v dZ + (∂R/∂v)ε_v√v dZ_v = (…)dt + √(v_R) dZ_R, where Z_R is a Wiener process in (Ω, F, F, Q) and the stochastic variance of R is v_R = g(lnA, v) = v(∂R/∂lnA)² + ε_v² v (∂R/∂v)². The dynamics of v_R and the correlation ρ_{R,v_R} between R and v_R can be derived explicitly along the same lines (equation (6)). The stochastic part of v_R depends not only on the Brownian motion driving the volatility of the assets but also on the Brownian motion driving the asset itself, due to the fact that g depends on both v and lnA. Figure 2 shows a plot of the correlation ρ_{R,v_R} as a function of the log-asset and the volatility of the asset for the following set of parameters: μ = 0.075, v_∞ = 0.01, κ_v = 0.75, ε_v = 0.1, D(0) = 4 (where the asset is assumed to be in the range [4, 12] at t = 0), K = 0.2, r = 0.03, T = 1, v(0) ∈ [0.002, 0.02].

Probability of default

In this framework the default of a company over a fixed period of time (t, T) can be triggered both by the behavior of the assets at maturity T and by the path of the assets before maturity. Consequently, the probability of default, denoted P(t, T), can be represented by the following expectation: P(t, T) = 1 − E_Q[1{A(T) − D(T) − K > 0} 1{τ > T} | F_t] (7).

A formula for the risk-neutral probability of default can be derived in a manner similar to the price of a barrier option under deterministic interest rates. As (3) has an additional discount factor under the expectation in contrast to (7), we need an additional factor to arrive at the result for (7). The result is provided next.

Proposition 2: The risk-neutral probability of default is P(t, T) = P(t, T, v) = 1 − e^{(x+a)/2} Gp(T − t, x − a, v) D(t), with a, x as in Proposition 1 and where Gp(T − t, x − a, v) can be evaluated via

[D(T) + K] Gp(T − t, x − a, v) = (1/π) ∫₀^∞ e^{α(T−t,k) + β(T−t,k)v} [sin((x−a)k) + sin((x+a)k)] k/(k² + ¼) dk + (1/2π) ∫₀^∞ e^{α(T−t,k) + β(T−t,k)v} [cos((x−a)k) + cos((x+a)k)]/(k² + ¼) dk,

with α(T − t, k), β(T − t, k) as defined in Proposition 1.

Calibrating the model

Equity markets allow us to observe the value of a company's stock with high frequency. However, we only have scarce and inaccurate information on the daily value of the assets of a company. Thus, calibrating the asset process (1) of a company has the difficulty that this process itself is not observable. Fitting the parameters involves two steps: first, inverting the option pricing formula (5) to solve for A subject to the parameters v_∞, κ_v, and ε_v, and second, the actual fitting of the parameters to the asset process. Other than in the setting with K = 0, the parameters which are to be fitted are not only used to describe the asset process, but also for estimating the asset process from the equity process via the option pricing formula. Solving this problem by standard optimization algorithms would be too time consuming, because every step of fitting the asset process would require a recalculation of the inverted asset series (subject to the parameters of the current fitting step).2 Consequently, we propose a recursive algorithm which is equally stable and robust, yet much more efficient.

2 Actually, for a time series of 10,000 data points we would expect to wait a man's lifetime to get the final parameters.

Figure 2 Correlation of log-equity and equity-volatility versus log-assets and asset-volatility.

Recursive fitting algorithm

The correlation in Figure 2 is negative, which further supports the usefulness of the model. Recall that negative correlation is called the leverage effect and accounts for the smile or skew structure of the volatility implied by option prices. The fitting method to estimate the parameters θ = (μ, v_∞, κ_v, ε_v) is a recursive methodology involving as many steps as we have data points in the time series. The time series considered grows with each step: for step i, the analyzed equity time series consists of i data points. In order not to have too short a time series, we wait for the first 100 data points to start the estimation procedure. The algorithm is divided into several parts assuring both accuracy and efficiency at the same time.

(A) Every single step, the following is done: we calculate the new asset data point applying the option pricing formula (5) and the parameters of step i−1 and, with the new asset time series, estimate the new parameters along the lines of Escobar et al. (2010).

(B) Every n(B) steps, the asset time series is updated completely with the just fitted parameter vector θ(i), i.e., all values are recalculated according to the option pricing formula (5) subject to θ(i).

(C) Every n(C) steps, θ(i) is refined according to a grid: from every single point in the grid, θ(i) is fitted, and the set of parameters that minimizes the error is chosen as the new θ(i). That way, we prevent the algorithm from terminating in local minima, which could also be due to numerical issues.

The reason why (B) and (C) are not done every single step (although that would be desirable) is simply a matter of computing time. As already




mentioned, the error function is well behaved (and thus smooth) in μ and v_∞, yet very rough (due to numerical reasons) in κ_v and ε_v. In order to prevent the algorithm from being stuck in a local minimum resulting from this unsmoothness, the grid search in step (C) overcomes this risk. In our analysis, we choose n(B) = 50 and n(C) = 50, which then requires approximately 10 hours for one time series of 10,000 data points.
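The loop structure of steps (A)-(C) can be written down schematically as follows. The helper callables invert_equity_price, estimate_theta, and fitting_error stand for the numerical inversion of the pricing formula (5), the moment estimators of Escobar et al. (2010), and the estimation error to be minimized; they are user-supplied placeholders with a purely illustrative interface, not code from the paper.

    # Schematic of the recursive fitting loop; the three helper callables must be supplied.
    import numpy as np

    def recursive_fit(equity, debt, K, theta0, grid,
                      invert_equity_price, estimate_theta, fitting_error,
                      n_B=50, n_C=50, warmup=100):
        theta = theta0
        assets = [invert_equity_price(e, d, K, theta)
                  for e, d in zip(equity[:warmup], debt[:warmup])]
        for i in range(warmup, len(equity)):
            # (A) every step: invert one new asset value and re-estimate theta
            assets.append(invert_equity_price(equity[i], debt[i], K, theta))
            theta = estimate_theta(np.asarray(assets))
            # (B) every n_B steps: recompute the whole asset series under the current theta
            if i % n_B == 0:
                assets = [invert_equity_price(e, d, K, theta)
                          for e, d in zip(equity[:i + 1], debt[:i + 1])]
            # (C) every n_C steps: restart the estimation from every grid point, keep the best fit
            if i % n_C == 0:
                candidates = [estimate_theta(np.asarray(assets), start=g) for g in grid]
                theta = min(candidates, key=lambda th: fitting_error(th, np.asarray(assets)))
        return theta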

Estimating the parameters


For estimating the parameters we use the estimators derived in Escobar et al. (2010), which are based on the work of Genon-Catalot et al. (2000). In the following, a recovery test is applied in order to numerically validate the fitting method described above. The general idea is to simulate several time series given a known set of parameters θ = (μ, v_∞, κ_v, ε_v) as well as the information on the debt, D and K. From the simulated asset time series, the corresponding equity time series are calculated by applying the option pricing formula (3). Given the equity time series as well as the information on the debt, the parameters of the underlying (yet unobservable) asset process are estimated. The parameters recovered in this way should then be equal to the initial parameters with which the time series had been simulated.
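The bookkeeping of such a recovery test can be organized as in the sketch below. It reuses the simulate_heston sketch from earlier, treats the full invert-and-fit procedure as a user-supplied callable estimate, and approximates the outlier rule described in the footnotes with per-parameter quantiles; it is a schematic of the experiment, not the code used to produce the tables.

    # Sketch of a recovery test: simulate, re-estimate, and summarize the fitted parameters.
    import numpy as np

    def recovery_test(true_params, estimate, n_series=250, n_points=10_000, trim=0.02):
        # true_params keys are assumed to match simulate_heston's keyword arguments
        fitted = np.array([estimate(simulate_heston(n_steps=n_points, seed=s, **true_params)[0])
                           for s in range(n_series)])
        lo, hi = np.quantile(fitted, [trim, 1 - trim], axis=0)   # rough outlier trimming
        kept = fitted[np.all((fitted >= lo) & (fitted <= hi), axis=1)]
        return {"mean": kept.mean(axis=0),
                "std": kept.std(axis=0, ddof=1),
                "median": np.median(kept, axis=0)}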


Figure 3 Estimated model parameters over time (assuming κ_v known); the panels show μ, v_∞, κ_v, and ε_v.


Recovering the parameters with kappa known


We first assume the parameter κ_v to be known. To start with, one single time series is analyzed in detail. An asset time series has been simulated with the following parameters: μ = 0.075, v(0) = v_∞ = 0.01, κ_v = 0.75, ε_v = 0.1, D(0) = 4 (where the equity is assumed to have a value of 1 at t = 0),3 K = 0.2, r = 0.03, T = 5, Δ = 1/252 (representing daily data), and n = 10,000 data points (i.e., approximately 40 years of data). Figure 3 visualizes the evolution of the fitted parameters in every step from 100 to 10,000. The test aims to recover the parameters θ = (μ, v_∞, κ_v, ε_v), assuming κ_v to be known. The first 100 steps are not included in the graphs as the estimation starts after 100 data points. The resulting final parameters are the following: μ = 0.05755, v_∞ = 0.01107, κ_v = 0.75, ε_v = 0.1010. As this is only one single time series, it does not come as a surprise that the true parameters are not matched perfectly; yet, the fitted parameters are very close to the true parameters. Looking at the evolution of μ(i), v_∞(i), and ε_v(i) in Figure 3, one can see that the parameters converge nicely as they scatter less, while at the same time accounting for the new information added by the new data points. In order to emphasize this statement, Figure 4 shows that the standard deviation of the fitted parameters θ(i) up to step i is decreasing. Of course, one single time series does not allow for drawing any conclusions on the quality of the method.
3 An equity value of one together with a debt of four implies an approximate value of five for the asset. This scenario is a particular case of Figure 2.

4 Note that for calculating the statistics, we did not consider those time series for which one of the parameters lies in the top or bottom 2 percent of the range of this parameter, in order to exclude outliers.

Figure 4 Standard deviation of the estimated θ(i) up to step i.

Consequently, we simulated 250 asset time series of 10,000 data points each with the same parameters as above. In one case, we calculated the equity from the simulated assets with K = 0.2, and in the other case with K = 0.05. Table 1 summarizes the outcomes of this test. The final parameters (i.e., in step 10,000) match the initial parameters very well.4


K = 0.05 | μ | v_∞ | κ_v | ε_v
True parameter | 0.0750 | 0.0100 | 0.7500 | 0.1000
Mean | 0.0761 | 0.0098 | 0.7500 | 0.0915
Standard deviation | 0.0137 | 0.0018 | 0.0000 | 0.0135
5% conf. | 0.0535 | 0.0068 | 0.7500 | 0.0693
Median | 0.0768 | 0.0096 | 0.7500 | 0.0910
95% conf. | 0.0987 | 0.0127 | 0.7500 | 0.1138

K = 0.2 | μ | v_∞ | κ_v | ε_v
True parameter | 0.0750 | 0.0100 | 0.7500 | 0.1000
Mean | 0.0760 | 0.0097 | 0.7500 | 0.0915
Standard deviation | 0.0138 | 0.0018 | 0.0000 | 0.0135
5% conf. | 0.0533 | 0.0067 | 0.7500 | 0.0692
Median | 0.0766 | 0.0096 | 0.7500 | 0.0912
95% conf. | 0.0987 | 0.0128 | 0.7500 | 0.1138

Table 1 Recovering the parameters with kappa known

Figure 5 Estimated model parameters over time (including κ_v); the panels show μ, v_∞, κ_v, and ε_v.

From top to bottom, the table gives information about the true parameters for θ = (μ, v_∞, κ_v, ε_v), the means of the fitted parameters, the standard deviation, the 5 percent and 95 percent confidence intervals (derived from that standard deviation assuming normally distributed parameters), and the median. As expected, the difference between K = 0.05 and K = 0.2 is negligible, indicating that the value of K does not have an influence on the quality of the fitting method.


Recovering the parameters with kappa unknown


Of course, in reality we do not know the true value of κ_v. The difficulty is that two parameters, κ_v and ε_v, describe the variance process of the assets. The variance process is, of course, not observable, and even the asset process which it describes is not observable (only the equity time series is). Thus, it is a challenge to accurately capture both of these twice-unobservable parameters.


Figure 5 shows the evolution of the fitted parameters over the same time series as in Figure 3, yet now with κ_v unknown. The two parameters μ and v_∞ evolve almost exactly the same as with κ_v known, with their final values deviating by less than 0.1 percent in relative terms. Yet, the picture looks different for κ_v and ε_v. The final values for those two parameters are 0.492 and 0.0818, around which they mostly scatter, yet with a high noise (as compared with μ and v_∞). The higher noise is due to the fitting being much more sensitive to the parameters κ_v and ε_v than to the other two, for the reason mentioned above. Other than that, κ_v often hits its boundary, which we set to the value 10 (relaxing that boundary to higher values made κ_v(i) hit that higher boundary).

Figure 6 Estimated model parameters over time (including κ_v, with and without penalty).
The graphs represent the estimations for μ, v_∞, κ_v, and ε_v respectively. The last two graphs (for κ_v and ε_v) show the estimation with penalty (smooth curve) and without penalty (random-like curve). The estimations with and without penalty are the same in the first two graphs.

In order to overcome the shortcoming of the parameters κ_v(i) and ε_v(i) suddenly hitting the boundaries and scattering widely, we impose a penalty term on the error function that is to be minimized (namely φ, the sum of squared deviations between theoretical and empirical estimators), preventing large jumps in the evolution of the single parameters: the penalized criterion is φ*(i) = φ(i) + w_κ[κ_v(i) − κ_v(i−1)]² + w_ε[ε_v(i) − ε_v(i−1)]². We examined various combinations of the penalty weights w_κ and w_ε. Of course, the higher the values chosen, the more phlegmatic the parameters behave; conversely, the lower the values, the faster they adjust. We found w_κ = 0.05 and w_ε = 0.5 to be reasonable values. Figure 6 compares the original evolution of the parameters (from Figure 5) with the evolution imposing the penalty.
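As a small illustration, the penalized criterion described above reduces to a one-line function. The pairing of the quoted weights 0.05 and 0.5 with the κ_v and ε_v terms follows the order in which the terms appear in the text and should be treated as an assumption of this sketch.

    # Penalized error: base error phi plus quadratic penalties on jumps in kappa_v and eps_v.
    def penalized_error(phi, kappa_now, kappa_prev, eps_now, eps_prev,
                        w_kappa=0.05, w_eps=0.5):
        return phi + w_kappa * (kappa_now - kappa_prev) ** 2 + w_eps * (eps_now - eps_prev) ** 2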


K = 0.2 | μ | v_∞ | κ_v | ε_v
True parameter | 0.0750 | 0.0100 | 0.7500 | 0.1000
Mean | 0.0762 | 0.0096 | 1.4831 | 0.1148
Standard deviation | 0.0140 | 0.0018 | 1.4479 | 0.0590
5% confidence | 0.0531 | 0.0067 | 0.0000 | 0.0177
Median | 0.0767 | 0.0094 | 0.8410 | 0.0983
95% confidence | 0.0992 | 0.0126 | 3.8647 | 0.2119

Table 2 Recovering the parameters with penalty function

the penalty. The differences can be observed only in the last two graphs, where the smooth curve represents the estimation with penalty and the random-like curve shows the estimation without penalty. Different scales are used in the last two graphs of Figures 5 and 6 in order to clearly show the smooth estimators. The figures show that the parameters μ and v_∞ do not deviate from the original parameter evolution irrespective of which penalty is chosen for κ_v and ε_v (note that no penalty is set for the two parameters μ and v_∞). For κ_v and ε_v we observe that the evolution is smoother and these parameters no longer exhibit jumps.

Table 2 summarizes the test for the 250 time series for which the parameters have been recovered, including κ_v.5 Comparing the values for μ and v_∞ with those of Table 1, one can barely tell the difference; these two parameters can be recovered just as if κ_v were known. However, now that κ_v and ε_v are both targeted, and both describe the unobservable variance process of the again unobservable asset process, ε_v scatters with a bigger standard deviation than before, yet still very well around the true parameter. For κ_v we can assess a skewed parameter distribution with a median of 0.84, very close to the true 0.75. Thus, we can conclude that the proposed fitting algorithm proved its quality in the presented complex problem, with its biggest advantage being that it makes the problem computationally manageable.

Case study: Merrill Lynch

As an application of the model, we want to examine the downfall of the Wall Street investment bank Merrill Lynch. Founded by Charles Merrill in 1914, Merrill Lynch had always been an icon on Wall Street with its stock performing strongly over decades. Yet, the investment bank could not survive the financial crisis, in which it suffered billions of losses from its activities in the MBS and subprime markets. Merrill Lynch was taken over by Bank of America in September 2008 and has been part of Bank of America since January 2009. Figure 7 displays the stock price of Merrill Lynch quoted on the New York Stock Exchange (NYSE) over the five years from 2003 until 2007, i.e., prior to the takeover. Using the method of calibration described in Escobar et al. (2010) to estimate the parameters for the Merrill Lynch time series between 2003 and 2007 for the presented model yields the following parameters assuming K = 0: μ = 0.04135, v_∞ = 0.0006383, κ_v = 0.6002, and ε_v = 0.02433. To obtain the parameters above, we need the level of debt. This is obtained by averaging the reported ratio between assets and equity for the selected period. Scaling the equity time series for ease of comparison to start with the value 1 in 2003, we get D(0) = 12.6. Furthermore, the maturity T of the modeled option is required to correspond to the maturity of the liabilities. As this information is not provided in the annual reports, we assumed T = 5, but found that the parameters are hardly sensitive to the choice of maturity. In accordance with this assumption, we used a risk-free rate of 3.94 percent, which is the average of the five-year treasury rate over the years 2003 until 2007.

Figure 7 Stock price of Merrill Lynch on NYSE (2003-2007).

5 Again, for calculating the statistics, we did not consider those time series for which one of the parameters lies in the top or bottom 2 percent of the range of this parameter. Prior to that, we excluded those time series where the boundary was hit (which was the case for 18 of the 250 cases). In those 18 cases, κ_v did not jump to the boundary, but monotonically increased until it reached it.



Figure 8 Relative change in equity price subject to K. Assuming D+K constant.

Figure 10 Change in probability of default subject to K. Assuming D+K constant.


Figure 9 Relative change in equity price subject to K. Assuming D constant.

Figure 11 Change in probability of default subject to K. Assuming D constant.

We next examine the sensitivity of the equity price and the probability of default towards K. These analyses are performed using the estimators for θ = (μ, v_∞, κ_v, ε_v) as well as the values for T, D(0), r, and the equity provided above. We consider two scenarios: in the first, the total debt D(0) + K is held constant while K varies; the second scenario assumes constant debt D(0) and varying K. In the first scenario K would represent the additional debt limit on the cash account, while the total debt is assumed constant and the assets remain the same, i.e., the parameters describing the asset process do not change subject to K. Figure 8 plots the equity price subject to K. We can observe a monotonically increasing equity value as K increases. However, the slope of the curve is decreasing, i.e., granting a higher cash limit is more effective for the first dollar than the second. If the company holds, for example, 1 percent of its market capitalization as a cash liability (which, unlike the regular liabilities, would not cause an immediate default), the equity value increases from 1 to 1.0031 if the assets of the company stay the same. If a cash limit of 10 percent is granted (and used instead of taking regular debt), the equity value increases to 1.0289. And if (consider this an illustrative, not realistic, case) this debt were of the same size as the equity value (in the case K = 0) itself, the equity value would increase to 1.1990. These findings can also be explained theoretically: granting a higher limit on the company's cash account simply lowers the default barrier while leaving the total debt untouched. This corresponds to a lower barrier line in the illustration given in Figure 1. Lowering the barrier simply decreases the

number of paths being knocked out and expiring worthless. Thus, the lower the barrier, the higher the value of the option. Figure 9 plots the equity price as a function of K under the second scenario, that of constant debt. The figure shows a steep decrease of the equity price when additional debt is taken in the form of a cash account. This can also be observed from Equation (3): an increase in K implies lower chances for the assets to be above the new total debt at maturity, D(T) + K, therefore decreasing the expected value. Note that the indicator term in Equation (3) does not change because of the assumption of constant debt. The analysis of the probabilities of default is performed first under the


first scenario. In this scenario, a decrease in the probability of default is observed when plotting this probability versus K (see Figure 10). This decreasing behavior of the probability of default can also be inferred from Equation (7): the total debt remains constant, so the probability of defaulting at maturity remains the same for all K; on the other hand, an increase in K leads to a decrease in the regular debt D(t), hence lowering the probability of default prior to maturity. This insight favors the use of K as an alternative to taking regular debt. The second scenario is explored in Figure 11. Here K is taken as additional debt, so the total debt D(0) + K increases. This implies an increase in the probability of default. This can be observed from Equation (7): an increase in the total debt means an unchanged probability of default before maturity but a higher probability at maturity. This shows the risk in taking additional debt in the form of a cash account instead of transforming regular debt using a cash account. A question which might be even more interesting is how the estimated parameters change if we still observe the same equity time series and assume different values for K while the total debt D(0) + K is kept constant. The answer is provided in Table 3, which gives the estimated parameters μ, v_∞, κ_v, and ε_v subject to K. From this table we can make the following observations. Due to the fact that an increase in K would lead to higher equity values (Figure 8), the estimated drift of the asset process should monotonically decrease with K; intuitively, if we fit the parameters to the same equity time series (but increased K), the assets do not have to perform as strongly as before to result in the same equity values. This decrease in drift gives room for an increase in volatility; note that the long-term mean v_∞ of the variance of the asset process is monotonically increasing for higher values of K. All other parameters are calibrated around these relationships. Examining the parameters κ_v and ε_v, we cannot observe a monotonic trend as we did for the other two parameters μ and v_∞. However, we have to bear in mind that these parameters are much harder to capture because they describe the variance process, and that we only have a comparatively small dataset consisting of five years of daily data to calibrate the parameters. Having said that, a general decrease in the parameter κ_v can be assessed, whereas we cannot observe a clear trend for ε_v. The trend in κ_v can also be explained by the nature of the model: a higher κ_v means that the variance returns faster to its long-term mean v_∞. Thus, for low values of κ_v, the likelihood of an extremely high variance increases, and so does the likelihood that the option matures worthless. This is perfectly in line with the argument raised for μ and v_∞. We have studied the implications of assuming K > 0.

K | μ | v_∞ | κ_v | ε_v
0 | 0.04135 | 0.0006383 | 0.6002 | 0.02433
0.01 | 0.04134 | 0.0006393 | 0.5869 | 0.02417
0.02 | 0.04132 | 0.0006411 | 0.5774 | 0.02403
0.03 | 0.04131 | 0.0006429 | 0.5872 | 0.02424
0.04 | 0.04130 | 0.0006446 | 0.5795 | 0.02405
0.05 | 0.04128 | 0.0006463 | 0.5758 | 0.02409
0.06 | 0.04127 | 0.0006480 | 0.5607 | 0.02379
0.07 | 0.04125 | 0.0006496 | 0.5610 | 0.02383
0.08 | 0.04124 | 0.0006512 | 0.5602 | 0.02386
0.09 | 0.04122 | 0.0006523 | 0.5558 | 0.02374
0.1 | 0.04121 | 0.0006544 | 0.5511 | 0.02371
0.2 | 0.04104 | 0.0006681 | 0.5348 | 0.02360
0.3 | 0.04086 | 0.0006792 | 0.5160 | 0.02341
0.4 | 0.04067 | 0.0006883 | 0.5000 | 0.02322
0.5 | 0.04047 | 0.0006955 | 0.5023 | 0.02341
0.6 | 0.04026 | 0.0007016 | 0.5022 | 0.02350
0.7 | 0.04005 | 0.0007066 | 0.5152 | 0.02398
0.8 | 0.03983 | 0.0007101 | 0.5044 | 0.02383
0.9 | 0.03962 | 0.0007148 | 0.5022 | 0.02370
1 | 0.03940 | 0.0007183 | 0.4940 | 0.02365

Table 3 Estimated parameters for different values of K

In general, a positive K could be beneficial if it replaces regular debt, as it reduces the probability of default (Figure 10) and increases the equity value (Figure 8). On the other hand, taking a positive K at the expense of increasing the overall debt leads to the opposite scenario, hence a higher probability of default (Figure 11) and a lower equity value (Figure 9). Unfortunately, K cannot be properly estimated from a time series of equity and debt values due to a problem of identifiability, and should thus be obtained from additional information from the company. In fact, if we examine the annual reports of Merrill Lynch, we learn that the short-term borrowings (divided by the total stockholders' equity) increase in upward-sloping markets and decrease in recessions. Over the calibration period 2003-2007, the short-term borrowings average to a value of approximately 0.4. Let us assume, for an expository sensitivity analysis, that K can be represented by the short-term borrowings and apply the parameters in Table 3 for K = 0.4. If we furthermore assume that the total liabilities remain constant, but K is eaten up totally by a market crash, Proposition 2 tells us that the probability of default increases from 25.6 percent to 32.96 percent and the equity value drops by 8.87 percent.

Conclusion
This paper continues the work of Escobar et al. (2010), where we combined the Heston model with the Black/Cox framework, providing a model for the company's asset process when only the equity process is observable. The assets follow a Heston model, and the company's equity value is

modeled as a barrier call option on the assets. In contrast to our previous work, we now allow the barrier and the strike price of the option to differ from each other, which allows for richer financial applications. We present a closed-form solution for the option price, the equity, in this model, which allows for most stylized facts on the equity process. This much more complex option pricing formula, including an improper integral, demonstrates the difficulties one often comes across. The computational effort to optimize a set of parameters is beyond available computational power, because it requires the evaluation of the formula (and thus also the integral) in every iteration of the optimization for every point of the asset time series, which is itself obtained by inverting the option pricing formula. We present a method to overcome this problem, which drastically limits the number of times the option pricing formula has to be evaluated. This method could also be transferred to various financial applications, for example: in the reduced-form credit framework, where the unobservable intensity is modeled and calibrated based on observable credit derivative prices; for fixed income products, where bond prices are observed but the unobservable instantaneous interest rate is modeled and therefore calibrated; or in cases where the implied volatility is targeted and the observable values come from option prices. Besides a numerical validation of the convergence of the proposed fitting algorithm, we provide a possible application of the model with Merrill Lynch as an example. We show the sensitivities of the model parameters subject to the relationship between the barrier and the strike price and give a theoretical interpretation of this behavior.

References

Black, F. and J. Cox, 1976, Valuing corporate securities: some effects of bond indenture provisions, Journal of Finance, 31:2, 351-367
Cont, R., 2001, Empirical properties of asset returns: stylized facts and statistical issues, Quantitative Finance, 1, 223-236
Cox, J., J. Ingersoll, and S. Ross, 1985, A theory of the term structure of interest rates, Econometrica, 53:2, 385-408
Dewynne, J., S. Howison, and P. Wilmott, 1993, Option pricing, Oxford Financial Press
Engle, R., 1982, Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica, 50:4, 987-1008
Escobar, M., T. Friederich, M. Krayzler, L. Seco, and R. Zagst, 2010, Structural credit modeling under stochastic volatility, submitted to the Journal of Financial and Quantitative Analysis
Feller, W., 1951, Two singular diffusion problems, The Annals of Mathematics, 54:1, 173-182
Genon-Catalot, V., T. Jeantheau, and C. Laredo, 2000, Stochastic volatility models as hidden Markov models and statistical applications, Bernoulli, 6:6, 1051-1079
Heston, S., 1993, A closed-form solution for options with stochastic volatility with applications to bond and currency options, Review of Financial Studies, 6:2, 327-343
Lipton, A., 2004, Mathematical methods for foreign exchange: a financial engineer's approach, World Scientific Publications
Merton, R., 1973, Theory of rational option pricing, Bell Journal of Economics and Management Science, 4, 141-183
Rosenblatt, M., 1956, A central limit theorem and a strong mixing condition, Proceedings of the National Academy of Sciences, 42, 43-47
Seber, G. A. F. and C. J. Wild, 2005, Nonlinear regression, John Wiley and Sons
Sepp, A., 2006, Extended credit grades model with stochastic volatility and jumps, Wilmott Magazine, September, 50-62


PART 2

A Stochastic Programming Model to Minimize Volume Liquidity Risk in Commodity Trading


Emmanuel Fragnière, University of Bath School of Management, and Haute Ecole de Gestion de Genève

Iliya Markov, School of Mathematics, University of Edinburgh

Abstract
The goal of this paper is to study a very important risk metric in commodity trading: volume liquidity risk. It begins by examining the statistical properties of the volume and settlement price change of futures contracts of different maturities. The results are used in the construction of a model for the minimization of volume liquidity risk, i.e., the inability to cover an unprofitable position due to lack of trading volume. The model is embedded in a stochastic program designed to construct a portfolio of futures contracts of different maturities with the aim of minimizing price and volume liquidity risk. The results of the case study (grain market) show that the model predicts the best spread trade accurately in 75 percent of cases. In the remaining cases the inaccuracy is due to the market shock present in the year 2008. A tool has been coded in Excel VBA to make the model available to traders and risk managers. This contribution relates directly to recent issues with energy ETFs (i.e., roll-over).

Liquidity risk is already an important factor in the management of financial risk. The current global economic crisis emphasized the need to include liquidity risk in risk management models. The financial meltdown of 2007-2008 saw the freezing of the markets for commercial paper, asset-backed securities, and collateralized debt obligations, among many others. Various risk models have been proposed that try to deal with liquidity dry-ups in the financial markets in different ways (estimation of probability, pricing of liquidity risk, etc.) [Pedersen (2008)]. Liquidity in the commodity sector, on the other hand, has not been given a proper academic treatment.

The main trading products in the commodity sector are futures contracts. Traders usually hedge risk by entering into contracts of different maturity and opposing direction. Distant maturities, however, are associated with very low trading volume, which means that an already established position may not be closed or changed due to lack of trading volume. Thus, if an already established spread trade is found to be unprofitable, a trader may not be able to cover it because of what this paper refers to as volume liquidity risk, or simply volume risk. This paper sets out to explore the patterns and relationships associated with volume and settlement price change and to build a stochastic program whose purpose is the construction of a portfolio where the risk of a volume liquidity trap is minimized or avoided. The use of stochastic programming is grounded in the uncertainty of the prices of distant maturities on the forward curve. The ultimate purpose of the paper is that the stochastic model should be applicable to real-life situations. Consequently, it was developed in VBA in Excel, which is the most widely used software in the commodity sector. All data used in this paper are from the Kansas City Board of Trade [KCBT (2010)], the world's largest exchange for hard red winter wheat.

Commodity futures                   Board
Eurodollar interest rate            CME
Three-month Euribor interest rate   LIFFE
Short sterling                      LIFFE
30-day Federal funds                CBT
3-Year Commonwealth t-bonds         SFE
Mini S&P 500 index                  CME
Crude oil light sweet               NYM
Silver 5000 troy ounce              NYM
Japanese Yen                        CME
British pound                       CME
Wheat hard red                      KCBT

Table 1 Relative contract liquidity, contracts listed from most to least liquid [Traders.com (2010)]

Literature review
The purpose of the literature review below is to position the paper in the context of the three main academic areas with which it is concerned. First and foremost, it discusses the lack of academic literature dealing with the concept of volume risk. Second, it gives a brief overview of stochastic programming and the reasons why it was chosen as the portfolio optimization technique. Third, it explains what past statistical results concerning futures-related time series this paper corroborates and what contributions it makes.

Volume risk
To the best of our knowledge, there is currently no academic literature that deals with the problem of volume liquidity risk as outlined above. Table 1 shows the volume liquidity of certain commodity and financial futures traded at various boards throughout the world. Table 1 lists the futures according to their liquidity. The more liquid the contracts, the easier it is to buy and sell. Interestingly, the hard red winter wheat futures traded at the KCBT are last, which means that they are relatively illiquid. Consequently, the results obtained in subsequent sections should be especially useful to traders at the KCBT. The analysis and conclusions, however, are applicable to any commodity trading board.

An important contribution of this paper is the fact that the analysis is focused on commodity instead of financial futures. The good performance of the financial markets until a few years ago led to a lateral treatment of the commodity sector in the academic literature. Partly due to the financial meltdown, however, recent years have seen the increased importance of the precious metals, oil, and grain markets. The increased participation of hedge funds in the commodity markets is another sign of their growing importance. There are numerous papers that study position optimization and the volumes associated with different maturities on the forward curve. The two concepts, however, are never integrated, and volume risk is never taken into account when establishing futures positions. Boyd and Kaastra (1995), for example, devise a model to forecast futures trading volumes of different maturities at the Winnipeg Commodity Exchange. Their model is an application of a neural network, an artificial intelligence method that acts like a human brain in finding patterns. The neural network uses a gradient descent algorithm and is able to provide predictions of trading volumes up to nine months into the future. Its predictive power is better than both the naïve model they discuss and the autoregressive integrated moving average model. De Roon and Veld-Merkoulova (2003) discuss long-term hedging strategies when only the first few contract maturities of a given commodity are actively traded. Instead of focusing on volume risk, however, they construct a hedging strategy using futures convenience yields that minimizes spot-price risk and rollover risk by using futures contracts of different maturities. This paper's main contribution, therefore, is the introduction, explanation, and modeling of volume risk in the commodity futures markets, a problem that has not been rigorously tackled before.

Stochastic programming
This paper employs a stochastic programming approach in the development of a model that determines portfolio positions with the aim of minimizing price and volume risk. Fragnière and Gondzio (2005) demonstrate why stochastic programming is the preferred optimization methodology in the presence of uncertainty about the future. In the case of deterministic modeling, a separate optimization is performed for each scenario. In the end, the decision maker knows what the optimal decision is under each scenario, but not knowing which scenario will unfold makes this information unusable. Stochastic programming combines all the uncertain future scenarios in a single tree structure, and the system is then optimized with the aim of finding the optimal decision given all possible scenarios. Fragnière et al. (2010) explain the description of future conditions by means of a multi-period stochastic tree, where the branches, referred to as scenarios, represent the uncertainties in the future. An alternative approach is the generation of scenario sample paths, where the only branching point is the origin. The latter approach circumvents the problem of dimensionality when the number of stages is significant, a problem known as the curse of dimensionality [Fragnière et al. (2010)]. Since the number of effective decision stages in this paper's stochastic program is three, the stochastic tree structure is adopted because it is thought to better represent the trader's decision process. The trader makes decisions depending on what the forward curve structure is. Since distant maturities are associated with great uncertainty, while close maturities converge to the spot price, the system is best represented by a stochastic tree that branches out with maturity.

Statistical analysis
There is a lot of academic literature that deals with the relationships among futures-related time series. The most profoundly researched relationship is the one between volume and price variability (or return, from another point of view). Grammatikos and Saunders (1986) find a strong positive relationship between trading volume and price volatility. Moreover, maturity has a strong effect on volume, but no effect on price volatility. Karpoff (1987) finds a strong positive relationship between volume and both the absolute value of price changes and price changes per se in equity markets. He builds a simple volume-price change relationship model. Wang and Yau (2000) also establish a positive relationship between volume and price change. This paper finds a strong positive relationship between volume and intraday price volatility, but no relationship between volume and settlement price change. Also, there is strong evidence of an impact of maturity on volume but no evidence of its impact on settlement price change. The distributions of settlement price change for the ten futures maturities are almost identical. On the other hand, the paper finds a strong maturity effect on volume, with the distributions changing significantly with maturity. The main advantage of the exploratory statistical analysis conducted in this paper is its practicality. It is aimed at enhancing commodity traders' knowledge about many of the important statistics pertaining to the market for hard red winter wheat by adding rigor to their intuition. Even though practicality takes priority over sophistication, all of the statistical analysis is done rigorously.

Statistical analysis
This section is intended to provide an exploratory statistical analysis of the first ten maturities of some of the most important time series that futures traders face: volume and settlement price change. The first part discusses some important seasonal patterns in the first ten maturities of the time series. The second part analyzes the relationships between volume and settlement price change and studies the distributions of the time series. The analysis and results should help place the concept of volume risk on solid ground by analyzing volume in the context of other time series. In addition, the results of the statistical analysis are directly used in the case study of the stochastic model below. All of the analyses are performed using futures trading data from 1 January 2005 to 1 June 2010.

Patterns
The analysis of the seasonal patterns of volume allows us to make the following observations:
1. The average volume decreases strictly with maturity.
2. The standard deviation of the volume of the first maturity is always lower than the standard deviation of the volume of the second maturity. From then on, the standard deviation decreases towards the most distant maturity, but not perfectly: it may have spikes.
3. There is a pattern in the distribution of volume, with some delivery months being above the average and some below it: July and December are above the average everywhere (except the extreme back end of December); September is above the average only for the nearest maturity and below the average everywhere else; March and May are below the average everywhere, with one exception for March; and May has the smallest average volumes of all maturities.
4. There is also a pattern in the distribution of volume across the maturity timelines of different months that the overall results are unable to pick up. (1) The contracts expiring in March, May, and September have large volumes for the first two maturities, and thereafter volumes decrease sharply. (2) The contracts expiring in July and December have volumes that decrease much more gradually across the maturity timeline. These also happen to be the contracts with the highest volumes of all maturities.

5. The volumes of all maturities move in the same direction from one delivery month to another. The pattern is as follows: March to May decrease, May to July increase, July to September decrease, September to December increase, December to March decrease. The last relation (December to March decrease) does not hold for the two most distant maturity dates (which could be due to insufficient data).
6. The standard deviation is not proportionate to the average volume. The ratio of the mean to the standard deviation of the volumes decreases with each successive maturity on the maturity timeline (but not perfectly). This is true both overall and for each delivery month. This means that volumes of distant maturities are much more volatile around the mean, which, of course, poses greater risk. The increased volatility can be explained by the large number of zero values and the occasional spikes to several tens or hundreds of traded contracts. It is also noticeable that the volumes for some months are generally more volatile around the mean than for others: for example, July and December have much more volatile volumes than May.

Given the findings above, let us turn to the shape of the volumes associated with the forward curve (Figures 1 and 2). As corroborated by the findings above, volumes associated with July and December deliveries are always higher on any individual forward curve, and this is a rule rather than a coincidence. Thus, volumes for July and December deliveries form spikes, or humps, in the forward curve volumes. The shapes of the forward curve volumes in Figures 1 and 2 illustrate why volume liquidity risk is an important risk in commodity trading. The low trading volumes of the long-term maturity contracts make covering and re-establishing long-term positions extremely difficult. The model proposed in this paper tries to deal with volume liquidity risk in a structured manner.

Figure 1 Forward curve volumes, 9 Jan 2009

Figure 2 Forward curve volumes, 23 Oct 2009

There are some interesting patterns associated with settlement price change as well:
1. The standard deviation is much larger than the mean in absolute value. This would suggest that price change is very volatile around the mean.
2. Looking at the overall results, the nearest maturity has a very slight tendency to drop in price, while all other maturities have slight tendencies to increase in price.
3. There is no visible seasonal pattern in the means. To be more precise, means for different delivery months are not consistently and significantly above or below the overall mean.
4. The standard deviations of the sixth maturity of March and May deliveries are disproportionately high, which is also visible in the overall standard deviation.
5. The standard deviations for September, July, and December exhibit an interesting pattern: July has increased standard deviations of maturities 2, 3 and 7, 8; September has increased standard deviations of maturities 3, 4 and 8, 9; and December has increased standard deviations of maturities 4, 5 and 9, 10.

The sequences above look suspicious at first glance, but there could be an explanation. A quick mental exercise reveals the following: the months associated with trading May as a sixth maturity are March and April. The months associated with trading July as a second, third, seventh, and eighth maturity, September as a third, fourth, eighth, and ninth maturity, and December as a fourth, fifth, ninth, and tenth maturity are the same, and they are December, January, February, March, and April. Historical data from the Kansas City Board of Trade reveal an enormous increase in volatility in the beginning of 2008. Consequently, ignoring December in the two points above, all of the months are in the beginning of the year.
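The seasonal observations above can be reproduced mechanically from the exchange's daily files. A minimal pandas sketch, using synthetic placeholder data and hypothetical column names rather than the actual KCBT records:

```python
import numpy as np
import pandas as pd

# Hypothetical daily records (placeholders, not KCBT data): one row per trading day and
# contract, with the delivery month, the contract's rank on the forward curve (1 = nearest)
# and the traded volume.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "delivery_month": rng.choice(["Mar", "May", "Jul", "Sep", "Dec"], size=5000),
    "maturity_rank": rng.integers(1, 11, size=5000),
})
df["volume"] = rng.gamma(shape=1.5, scale=800.0 / df["maturity_rank"])

# Mean, standard deviation and mean/std ratio of volume by delivery month and maturity rank,
# the quantities behind observations 1 to 6 above.
stats = (df.groupby(["delivery_month", "maturity_rank"])["volume"]
           .agg(["mean", "std"]))
stats["mean_to_std"] = stats["mean"] / stats["std"]
print(stats.head(12))
```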

Relationships and distributions


This section analyzes some important relationships and approximates volume and settlement price change by analytical distributions. The empirical distributions are approximated with varying degrees of precision, as described below. The results below help us in the construction of the stochastic tree in the case study section. There are strong relationships between the average maturity values of volume and other important series in commodity trading, such as drawdown, the difference between the daily high and low, and open interest. Settlement price change does not
exhibit a relationship with any of the series, not even volume. The relationships between the average maturity values of volume on the one hand and drawdown, daily high/low and open interest on the other hand are very well described by power regressions. In all three cases the adjusted R-square is higher than 0.99.
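The power regressions mentioned here amount to an ordinary least-squares fit on log-transformed averages. A minimal sketch with illustrative placeholder values (not the paper's data):

```python
import numpy as np

# Average per-maturity values (illustrative placeholders): volume and open interest.
avg_volume = np.array([5200.0, 3100.0, 900.0, 400.0, 150.0, 60.0, 25.0, 12.0, 6.0, 3.0])
avg_open_interest = np.array([41000.0, 29000.0, 12000.0, 6500.0, 3100.0,
                              1500.0, 700.0, 380.0, 210.0, 120.0])

# Fit open_interest = a * volume**b by OLS on the logs.
b, log_a = np.polyfit(np.log(avg_volume), np.log(avg_open_interest), 1)
pred = np.exp(log_a) * avg_volume**b

# Simple R-squared on the log scale (the paper reports adjusted R-squared above 0.99).
ss_res = np.sum((np.log(avg_open_interest) - np.log(pred))**2)
ss_tot = np.sum((np.log(avg_open_interest) - np.log(avg_open_interest).mean())**2)
print(b, np.exp(log_a), 1.0 - ss_res / ss_tot)
```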

The first six maturities of volume can be well approximated by analytical distributions. Even though the best distribution is not the Gamma in every case, the Gamma is usually the second or third best and has only a slightly higher AD statistic. The seventh through tenth maturities cannot be well approximated by analytical distributions. It is easily observed that the mode of the distribution moves closer and closer to zero with each successive maturity, because distant maturities are associated with low volumes. In other words, there is a strong maturity effect on volume. The best distribution for each of the maturities of settlement price change is either the logistic or the log-logistic. Since the difference in AD between the logistic and the log-logistic distribution is minute in every case, the logistic distribution was fitted to all maturities. There is no evidence of a maturity effect on settlement price change. The logistic distributions of settlement price change are almost identical for all maturities.
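The distribution fitting described in this section can be reproduced with standard maximum-likelihood routines. The sketch below uses synthetic samples and a Kolmogorov-Smirnov check in place of the AD comparison used in the text, so it is only an illustration of the procedure:

```python
import numpy as np
from scipy import stats

# Hypothetical samples for one maturity (placeholders, not KCBT data).
volume = np.random.default_rng(1).gamma(shape=1.4, scale=900.0, size=1000)
price_change = np.random.default_rng(2).logistic(loc=0.2, scale=6.0, size=1000)

# Gamma fit for volume (location pinned at zero), logistic fit for price change.
g_shape, g_loc, g_scale = stats.gamma.fit(volume, floc=0.0)
l_loc, l_scale = stats.logistic.fit(price_change)
print("gamma:", g_shape, g_scale)
print("logistic:", l_loc, l_scale)

# Kolmogorov-Smirnov statistics as a simple goodness-of-fit check.
print(stats.kstest(volume, "gamma", args=(g_shape, g_loc, g_scale)))
print(stats.kstest(price_change, "logistic", args=(l_loc, l_scale)))
```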
Figure 3 Futures account settlement

Methodology
This section is the crux of the paper. It explains how volume risk presents itself as the inability of a position to be covered or changed due to lack of trading volume, which could lead to a monetary loss. It also explains the interconnectedness of volume risk and price risk. A simple model for quantifying volume risk is presented. This model is then included in a stochastic optimization algorithm, whose aim is the construction of a portfolio that minimizes or avoids the risk of a volume liquidity trap. The algorithm's objective is the minimization of volume and price risk. The minimization is subject to a number of core, optional, and mutually exclusive conditions, which correspond to a trader's preferences. These include the establishment of a bull or bear spread, the specification of the minimum and maximum number of contracts, etc. The predictions of the stochastic program are tested in the case study section. The case study employs results from the statistical analysis section.

Volume risk
The problem that this paper sets out to explore is a very important risk associated with commodity trading. Volume risk is the risk that there might not be sufficient volume associated with distant maturities if a trader decides to unwind a position. This could lead to a much larger loss than the one associated simply with price changes. The problem is how to quantify this risk. Let us suppose a trader is long on the short term and short on the long term (Figure 3). Assume the trader is making money on the short-term contract and losing money on the long-term contract through the daily account settlement. Further, assume that the trader wants to cover his short position on the long-term contract by going long on the same contract. If he is unable to buy, he will continue losing money on the contract. In fact, there is a great chance that he will not be able to buy, because volumes in the back end of the forward curve are very low and often there is no trade at all. The loss that the trader will make depends on when he is able to cover his position, which in turn depends on when volume will be sufficient. Of course, there will not be any volume risk of this kind associated with the first two or three maturities (depending on the delivery month) because they have sufficient trading volume. In order to quantify volume risk, we need a subjective measure of what number of contracts traded on a given day is safe and does not pose any risk. For any number of traded contracts lower than this threshold there is risk. Let SafeVol be the threshold level and V the average number of contracts actually traded, with V ≤ SafeVol. Let MaxPriceChange(t→ok) denote the price change in an unfavorable direction, at a given confidence level, that the trader may experience from the time t when the cover has to take place to the time ok when there will be sufficient volume associated with this maturity. This time interval can be inferred from the statistical analysis of volume presented above, and the price change from the settlement price change distribution discussed above. Then

VolumeRisk = [(SafeVol − V)/SafeVol] × MaxPriceChange(t→ok)
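The definition above translates directly into code. The authors' tool is written in Excel VBA; the following Python sketch, with made-up inputs, is only an illustration of the formula:

```python
def volume_risk(safe_vol: float, avg_vol: float, max_adverse_move: float) -> float:
    """Volume risk per the formula above: the shortfall of traded volume relative to
    the safe level, scaled by the adverse price move expected until volume recovers."""
    if avg_vol >= safe_vol:
        return 0.0                      # enough volume to cover, no volume risk
    return (safe_vol - avg_vol) / safe_vol * max_adverse_move

# Illustrative numbers: safe level of 100 contracts, 20 contracts actually traded,
# and a 0.35 adverse price move (at some confidence level) until volume recovers.
print(volume_risk(100, 20, 0.35))       # 0.28
```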

Model preliminaries
Figure 4 is a diagrammatic representation of the forward curve that is used in the stochastic model. In line with intuition, prices in the near end are much more certain than prices in the back end. Every combination of successive cases represents a possible combination of prices associated with the successive maturities. It should be mentioned here that the maturities in the diagram do not coincide with the maturities discussed above. This is a simplified construction, where the first maturity represents the short-term maturities, the second maturity represents the medium-term maturities, and the third maturity represents the long-term maturities of a real forward curve. The price change distribution discussed above is obtained in the direction described by the solid black arrows in Figure 4, i.e., prices further from the delivery date are subtracted from prices closer to the delivery date. In order to construct the case system above, however, we need price changes where prices closer to the delivery date are subtracted from prices further from the delivery date, which is described by the dashed black arrows. We can obtain the distribution described by the dashed arrows by reflecting the distribution described by the solid arrows around the y-axis.

Figure 4 Forward curve model (three decision stages; each case branches into two equally likely cases at the next maturity)

Figure 5 presents the scenario development and highlights the regions where there could be volume risk. Two regions are highlighted. The inner region represents volume risk in the second stage. The outer region represents volume risk in the third stage and includes the possibility that volume risk does not disappear in the second stage. The first stage is always assumed to have no volume risk. Volume risk in the second stage takes into account the number of contracts in the second stage, the probability of the leg occurring, the value of the unfavorable price movement that would occur if the trader is unable to unwind times its probability, and the volume risk ratio as expressed by the formula [(SafeVol − V)/SafeVol]. Volume risk in the third stage is more complicated because it takes into account both the third and the second stage. If the second stage has no volume risk, then volume risk in the third stage is calculated in exactly the same way as volume risk in the second stage. If, however, there is volume risk in the second stage, volume risk in the third stage is calculated by taking the [(SafeVol − V)/SafeVol] ratio in the third stage times the unfavorable price movement from the third to the second stage times its probability, plus the [(SafeVol − V)/SafeVol] ratio in the second stage times the unfavorable price movement from the second to the first stage times its probability. The sum is tested for being unfavorable or favorable as a whole. Only if it is unfavorable is it multiplied by the probability of the leg occurring and the number of contracts in the third stage.

Figure 5 Volume risk model
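A sketch of the stage-wise calculation just described, for a single leg of the tree. The function names, the sign convention (losses entered as positive numbers), and the example inputs are assumptions for illustration, not the authors' VBA implementation:

```python
def ratio(safe_vol, avg_vol):
    # Volume risk ratio [(SafeVol - V)/SafeVol], zero when volume is sufficient.
    return max(safe_vol - avg_vol, 0.0) / safe_vol

def stage2_volume_risk(n2, p_leg, adverse_2to1, p_move, safe_vol, v2):
    # Contracts in stage 2 x leg probability x (adverse move x its probability) x ratio.
    return n2 * p_leg * adverse_2to1 * p_move * ratio(safe_vol, v2)

def stage3_volume_risk(n3, p_leg, adverse_3to2, p_32, adverse_2to1, p_21,
                       safe_vol, v3, v2):
    r2, r3 = ratio(safe_vol, v2), ratio(safe_vol, v3)
    if r2 == 0.0:                       # no volume risk in the second stage
        return n3 * p_leg * adverse_3to2 * p_32 * r3
    total = r3 * adverse_3to2 * p_32 + r2 * adverse_2to1 * p_21
    return n3 * p_leg * total if total > 0 else 0.0   # count only if unfavorable overall

# Illustrative leg: adverse moves as positive losses, 50 percent branch probabilities.
print(stage2_volume_risk(n2=2, p_leg=0.5, adverse_2to1=0.30, p_move=0.5,
                         safe_vol=100, v2=40))
print(stage3_volume_risk(n3=2, p_leg=0.25, adverse_3to2=0.45, p_32=0.5,
                         adverse_2to1=0.30, p_21=0.5, safe_vol=100, v3=5, v2=40))
```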

Stochastic model
As explained in the literature review section, a stochastic program is thought to better represent a trader's decision process as it accounts for the uncertainties associated with the prices of contracts with distant maturities. Another important reason for the application of stochastic programming is the problem of friction. First of all, positions on a given forward curve are established simultaneously, taking into account the fact that prices of distant maturities are uncertain; decisions are not made sequentially in each stage. Second, the presence of volume risk means that a given position may not be covered instantly in order to adjust to changing circumstances. Consequently, optimization of individual scenarios does not convey accurate information about the best spread trade. The stochastic model developed for this paper is essentially a historical simulation. Consequently, it has the same shortcoming that all historical simulations have: the application of past data to predict the future. In order to benefit from the model, users need to understand the logic behind using a historical simulation to infer the future:
1. A given model gives a structured framework for thinking about risk [adapted from Jorion and Taleb (1997)]. In this sense, a model structure is superior to intuition because it lays out a systematic way of
managing risk. Once the model structure is found to be invalid, it should be subjected to review.
2. An optimization model gives the best trading strategy based on all the available information. Intuitive establishment of positions can rarely emulate the analytical precision of an optimization model.
3. Predictions made by a historical simulation model should hold reasonably well provided that there are no abnormal market movements [adapted from Jorion (1997)]. Historical simulations do not have the ability to predict abnormal market shocks, unless those are present in the past data that were fed into the model.
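One way to picture the deterministic equivalent of such a three-stage tree is a brute-force search: enumerate small integer positions, evaluate expected gain and a loss-based risk measure across the scenarios, and keep the feasible portfolio with the lowest risk. The sketch below is a toy illustration under invented scenario data; the authors' Excel VBA model uses richer constraints and the volume risk terms described above:

```python
import itertools

# Hypothetical scenario tree flattened to four equally likely leaves, each giving a price
# change per maturity (short, medium, long); positive numbers favour long positions.
scenarios = [
    {"p": 0.25, "moves": (-15.0,  22.0,  30.0)},
    {"p": 0.25, "moves": (-15.0,  22.0,  -8.0)},
    {"p": 0.25, "moves": ( 10.0, -12.0,   5.0)},
    {"p": 0.25, "moves": ( 10.0, -12.0, -20.0)},
]
required_gain = 10.0

best = None
for pos in itertools.product(range(-2, 3), repeat=3):          # integer positions per maturity
    gains = [sum(n * m for n, m in zip(pos, s["moves"])) for s in scenarios]
    exp_gain = sum(s["p"] * g for g, s in zip(gains, scenarios))
    risk = sum(s["p"] * max(-g, 0.0) for g, s in zip(gains, scenarios))  # expected loss measure
    if exp_gain >= required_gain and (best is None or risk < best[0]):
        best = (risk, exp_gain, pos)

print(best)   # (risk, expected gain, positions for the short/medium/long maturities)
```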

Constraint or data   Value
Required return      1000
Initial price        4000
Volume risk          Safe volume level is 100 contracts

Table 2 Case study constraints

Case study
This section develops a case study designed to assess the usefulness and predictive power of the stochastic program explained above. The results of the case study show that the model predicts the best spread trade accurately in 75 percent of cases. In the remaining quarter of the cases the inaccuracy is due to the market shock present in the year 2008. The case study comprises five years of data, from 1 January 2006 through 1 June 2010. In each case, the model is calibrated using data from a given year n with the aim of predicting the best spread trade for the following year n + 1. The predictions are then back-tested using data from the year n + 1. The case study was carried out using the tools specifically designed for the purpose. In order to make sensible comparisons, the requirements in Table 2 hold in all cases.

Predictions and realizations
Using data from 2006 to predict the best spread for 2007 produces very good results. The type of contracts in each stage is predicted accurately, even though the number is exaggerated. Although the realized risk is larger than the predicted one, the realized gain is also significantly higher than the predicted one. Table 3 gives the predicted and realized values of the risk, the gain, and the number of contracts. Overall, predictions are very good.

2007                 Predicted   Realized
Minimized risk       2197.76     5637.08
  Pure price risk    2197.76     5637.08
  Volume risk        0           0
Gain                 1960.5      3543.25
  From long          3335.5      9137.25
  From short         -1375       -5594
Contracts
  First stage        2 short     1 short
  Middle stage       2 long      1 long
  Third stage        2 long      1 long

Table 3 Predictions and realizations for 2007

Table 4 reveals that when data from 2007 is used to predict the best spread trade for 2008, the results are satisfactory as well. The realized gain is slightly smaller than the predicted one, but still larger than the required 1000. The realized risk is also smaller. Unlike in the previous case, the sources of profit are not predicted with accuracy. According to the predictions, the profit should come from the long positions. In reality, however, the profit comes from the short positions. Nevertheless, predictions can be classified as relatively good.

2008                 Predicted   Realized
Minimized risk       2818.54     1800.55
  Pure price risk    2818.54     1800.55
  Volume risk        0           0
Gain                 1771.63     1065.38
  From long          4568.63     -657.13
  From short         -2797       1722.5
Contracts
  First stage        1 short     1 short
  Middle stage       1 long      No contracts
  Third stage        1 long      1 long

Table 4 Predictions and realizations for 2008

Table 5 shows that, unlike in the previous two cases, when data from 2008 is used to predict the best spread for 2009 the results are not as good as expected. Even though a bull spread is both the predicted and the realized best spread trade, overall, the realized gain is much smaller than the required 1000. This is a consequence of the different shape of the forward curve in 2009 as compared to the one in 2008. The small realized gain, however, is mitigated by the lower value of any further possible losses, as confirmed by the realized risk. Overall, predictions for 2009 are poor.

2009                 Predicted     Realized
Minimized risk       1141.15       1066.17
  Pure price risk    1141.15       1066.17
  Volume risk        0             0
Gain                 2168.38       88.5
  From long          445.88        -834
  From short         1722.5        922.5
Contracts
  First stage        1 short       1 short
  Middle stage       No contracts  1 short
  Third stage        1 long        1 long

Table 5 Predictions and realizations for 2009

Table 6 shows that the best spread for 2010 is not determined with accuracy using data from 2009. However, even though the predicted and the realized best spread trade do not coincide, this is due to a minute difference in the risk. If a bull spread is applied to the data from 2010, the risk is increased by only several units. Nevertheless, the predicted and realized values of both the risk and the gain are of comparable magnitude. Results can be described as good.

2010                 Predicted   Realized
Minimized risk       1521.88     1806.33
  Pure price risk    1521.88     1806.33
  Volume risk        0           0
Gain                 1105.5      1189.5
  From long          -834        -1582.5
  From short         1939.5      2772
Contracts
  First stage        1 short     1 long
  Middle stage       1 short     1 short
  Third stage        1 long      1 short

Table 6 Predictions and realizations for 2010

The year 2008 contains an abnormal shock in terms of settlement price changes. In addition, the forward curve changes its slope during 2008. These abnormal shocks compromise the predictive power of the model, which can easily be seen above when data from 2008 is used to estimate the best spread trade for 2009. Like all historical simulations, this model's predictive power is very good as long as the market does not produce abnormal shocks.

The peculiar shapes of the price change distributions used above result in no volume risk associated with any of the predictions and realizations. To make a case for volume risk, we can also make predictions using a price change distribution based on data from 2005 to 2010 (1 June). Establishing a spread trade with ten long contracts in both the short and the medium term and ten short contracts in the long term produces a volume risk of 761.64. Normally, volume risk will appear when the established spread is not the optimal one.

Volume risk
The values of the volume risk in Tables 3 through 6 above show that the model was optimized in a way that volume risk was reduced to zero, given all the constraints that were imposed. In other words, provided that the proposed best spread trade strategy is established in each case, the portfolio will not be exposed to volume risk. A case in point is the example at the end of the previous section, where the established spread trade is not the best one possible. Unsurprisingly, volume risk is present. A worse spread strategy will produce an even higher volume risk. As a comparison, the best spread trade in that example is ten short contracts in the short term and ten long contracts in both the medium and the long term. In addition to satisfying all constraints, this portfolio is predicted to incur a price risk of 1463.05 and a volume risk of only 14.09. Volume risk will generally be present whenever the portfolio includes positions liable to unfavorable price movements and insufficient volume. As a rule of thumb, higher gain is generally associated with higher risk. And even though analytical models can produce precise answers given large quantities of data, it is always down to the risk manager's judgment to choose the best option [Fragnière and Sullivan (2006)]. The option to be chosen depends on the risk manager's risk appetite and his overall strategy [Fragnière and Sullivan (2006)]. The same holds for the results of the stochastic model presented in Tables 3 to 6. To facilitate comparison, only the options with the lowest risk were presented. Other options may have higher risk rewarded by a higher gain.

Conclusion
This is the first paper to consider volume risk in the optimization of a portfolio of commodity futures contracts. Volume liquidity risk is due to the low trading volume in the back end of the forward curves. The results of the case study suggest that the stochastic model is successful in determining the optimal spread trade as long as markets do not behave abnormally. The case study provided a comparison of the predictive power of the stochastic program during both normal and abnormal market behavior. Like all historical simulations, the predictive power of the model is significantly reduced in the second case. Nevertheless, considering the rareness of such market shocks and the fact that the model's predictions were valid in 75 percent of the cases, it can be classified as successful. Moreover, in all cases the volume risk of the determined best spread trade was reduced to zero. Even if the structure of the forward curve does not allow the complete elimination of volume risk, the model's estimates will always produce the spread trade that involves the least amount of volume risk. The stochastic program developed for this paper has three decision stages. The advantages of extending the program to more stages are arguable. While it may theoretically lead to a more realistic description of the forward curve, the model will become too complicated and unwieldy. The number of scenarios grows as a power of two with each successive stage, and so does the number of constraints. Adding even one or
two more stages will make integer optimization impractically slow. A possible solution would be the implementation of the model in XPress. Even so, the programmatic description of the model would be a challenge. What is more, this would defeat the purpose of the model being a practical and easy-to-use application in Microsoft Excel.

References

Boyd, M. S. and I. Kaastra, 1995, Forecasting futures trading volume using neural networks, Journal of Futures Markets, 15:8, 953-970
De Roon, F. A. and Y. V. Veld-Merkoulova, 2003, Hedging long-term commodity risk, Journal of Futures Markets, 23:2, 109-133
Fragnière, E. and G. Sullivan, 2006, Risk management: safeguarding company assets, Thomson NETg, Stamford, CT
Fragnière, E., J. Gondzio, N. S. Tuchschmid, and Q. Zhang, 2010, Non-parametric liquidity adjusted VaR model: a stochastic programming approach, Journal of Financial Transformation, 28, 109-116
Grammatikos, T. and A. Saunders, 1986, Futures price variability: a test of maturity and volume effects, Journal of Business, 59:2, 319-330
Jorion, P., 1997, Value at risk: the new benchmark for controlling market risk, McGraw-Hill, London
Jorion, P. and N. Taleb, 1997, The Jorion-Taleb debate, Derivatives Strategy, available at: http://www.derivativesstrategy.com/magazine/archive/1997/0497fea2.asp
Kansas City Board of Trade, 2010, Historical data, available at: http://www.kcbt.com/historical_data.asp
Kansas City Board of Trade, 2010, KCBT sets new daily trading volume record in HRW wheat futures, KCBT subscription service
Karpoff, J. M., 1987, The relation between price changes and trading volume: a survey, Journal of Financial and Quantitative Analysis, 22:1, 109-126
Pedersen, L. H., 2008, Liquidity risk and the structure of financial crises, presentation for the International Monetary Fund and Federal Reserve Board, New York University Stern School of Business, New York, USA, available at: http://pages.stern.nyu.edu/~lpederse/papers/LiquidityRiskSlidesLHP.pdf
Traders.com, 2010, Futures liquidity - June 2010, Technical analysis of commodities and stocks, available at: http://www.traders.com/Documentation/FEEDbk_docs/2010/06/FutLiq.html
Wang, G. H. K. and J. Yau, 2000, Trading volume, bid-ask spread, and price volatility in futures markets, Journal of Futures Markets, 20:10, 943-970


PART 2

The Organization of Lending and the Use of Credit Scoring Techniques in Italian Banks1
Giorgio Albareto, Structural Economic Analysis Department, Bank of Italy
Michele Benvenuti, Economic Research Unit, Florence Branch, Bank of Italy
Sauro Mocetti, Economic Research Unit, Bologna Branch, Bank of Italy
Marcello Pagnini, Economic Research Unit, Bologna Branch, Bank of Italy
Paola Rossi, Economic Research Unit, Milan Branch, Bank of Italy
Abstract
This paper examines the results of a survey carried out in 2007 by the Bank of Italy concerning different characteristics of the organization of lending activities. Between 2003 and 2006 the physical distance between the headquarters and the branches increased, the limits on the decision-making power of loan officers were eased, their mobility increased, and the use of economic incentives to reward their activity expanded. The huge heterogeneity in organizational structures persists even within relatively homogenous size classes. The diffusion of statistical models to assess credit risk (scoring) has accelerated recently, particularly among large banks, boosted by the new Basel Capital Accord. Scoring is either very important or decisive in decisions on credit extension, while it is rarely influential in setting interest rates, the duration of the credit, and the amount and type of collateral required. The survey shows that banks have been progressively adapting their organizational structure in order to incorporate credit scoring tools into their lending processes.

1 The authors wish to thank Guglielmo Barone, Enrico Beretta, Luigi Cannari, Xavier Freixas, Giorgio Gobbi, Giacinto Micucci and Paolo Emilio Mistrulli for their comments. We are especially indebted to Guglielmo Barone for his help in formulating the questionnaire that was distributed to the banks. A special thanks also goes to Rino Ferlita and Angela Romagnoli, whose help during the various phases of the survey proved indispensable, and to Paolo Natile for his assistance in the preparation of several programs. The survey was made possible by the kind cooperation of the Bank of Italy's branch network. This paper is part of a research project at the Bank of Italy on Banking organisation and local credit markets [Cannari et al. (2010)]. The views expressed in this paper are those of the authors and do not involve the responsibility of the Bank of Italy.


Introduction and main results


During the 1990s, two major factors affected the Italian banking industry: liberalization and an intensive wave of technological innovation originating in the ICT sector. As a result, the banking system underwent a process of consolidation, banks expanded and entered new markets, and internal decision-making processes for granting loans were completely overhauled. The ways in which households and firms accessed credit changed. In the wake of these transformations, banks grew in size and organizational complexity; they now found themselves having to manage their presence in a number of different geographical and product markets. Large banks were not the only ones affected by this trend, as small- and medium-sized banks frequently joined larger banking groups or expanded internally; in both cases, the leaps in size sometimes led to organizational discontinuity. The rapid advances in ICT had a profound effect on the output of the entire banking industry.

These transformations imply that we need an updated and deeper knowledge of how banks organize the many aspects of their lending activities (customer screening, the terms and conditions of lending, monitoring of the borrower's conduct, etc.). The literature on bank-firm relationships generally treats banks as unitary entities and neglects the characteristics of their internal structure. Recently, however, the literature on organization has spawned several papers that emphasize the importance of the strategic interaction among managers in charge of various functions within the banking organization; these managers have different information and are bearers of interests that do not necessarily coincide. It has been shown, both theoretically and empirically, that the ways in which this interaction occurs can affect the effectiveness of credit allocation, especially in the case of SMEs. One of the main consequences of technological change for credit markets was the introduction of credit scoring2 techniques based on standardized data. Despite their increasing importance, including in the Italian market, there are few studies on the diffusion and use of these procedures. To collect data useful for understanding these changes, the Bank of Italy conducted a survey in 2007 of over 300 banks, which represented the universe of intermediaries of a certain minimum size and organizational complexity. This report presents the results of that survey.

The analysis of banks' internal organization revealed profound differences among the Italian intermediaries from four standpoints: the geographical distance between a bank's headquarters and its branch network; the decision-making autonomy of the branch managers, proxied by the amount of small business credit that they are authorized to extend in proportion to that which can be approved by the CEO; the length of the branch managers' tenure; and the use of incentives for their remuneration. Part of this heterogeneity is accounted for by the size and institutional differences of the banks surveyed. However, some heterogeneity still exists

among homogenous groups of intermediaries. The results confirm how the internal structure of lending activities adapts to specific circumstances and forms a crucial component of banks' competitive strategies. An implication of these findings is that an analysis of these phenomena must employ a broader and richer taxonomy than the traditional one based on bank size.

For most of the participating banks, the distance between their headquarters and the branch network increased between 2003 and 2006. Bank managers enjoyed greater mobility and autonomy in decision-making, and economic incentives were more frequently used for their remuneration. The results do not support the thesis that the advent of new technologies greatly diminished the role of bank managers, with negative repercussions for banks' direct interaction with SMEs. On the contrary, it is possible that lower communication costs favored the greater autonomy of local managers in the periphery (branches or local decision-making centers). Increased mobility could be the result of events that were partially exogenous to the banks' strategy (i.e., mergers and acquisitions or tough competition in the local credit markets). It could also be the result of an active policy by banks to reduce the costs of opportunistic practices by local branch managers.

The survey has shown that credit scoring has spread among Italian intermediaries, with a sharp acceleration in recent years that is probably related to the introduction of the new Capital Adequacy Accords (Basel II). The diffusion process was more pronounced among large banks, which were in a position to exploit economies of scale. Credit scoring techniques mostly process balance sheet data, which are historically the most frequently used element for evaluating creditworthiness. Larger banks generally use internally developed models and place great emphasis on qualitative data, such as borrowers' corporate governance and organization, and the project to be financed (for large firms especially). Although scoring techniques play a central role in the decision to grant credit, they are not frequently used to determine the terms and conditions of loans. The scores generated by the application of these techniques appear more stringent for large banks than for smaller ones. Overall, the results suggest that the new techniques, which banks are still in the process of adapting, flank but do not entirely replace the previous evaluation processes and data sources.

The organization of lending and the use of credit scoring: the main issues
Banks usually rely on a mix of data sources to assess firms' creditworthiness. Some information is easy to classify and transmit from a distance (hard information), whereas other types of information are acquired through personal contacts and comprise qualitative elements that are difficult to communicate to people other than those who collected them (soft information). It is generally thought that qualitative data play a greater role in the evaluation of start-ups and small businesses, which are prone to having more opaque information (for example, owing to fewer years of experience) or less reliable processed data and are, in any event, subject to less strict information requirements (such as accounting data) than large corporations. The collection of quantitative and especially qualitative information about small firms is done through the branch manager. It is usually at this level of the bank organization that the first contact is made with the small firm, the assessment of creditworthiness is activated, and the relevant information is transmitted for evaluation at the higher levels of the banking structure. In some instances, the decision of whether or not to grant a loan, and the loan's terms and conditions, are made in complete autonomy by the branch manager.

The literature on corporate organization acknowledges that it is possible for the objectives of branch managers to diverge from those of the ultimate control holders. It also emphasizes how specialization in the gathering of data and information asymmetries between the headquarters and the branch network within a complex organization can generate the need for a two-way transfer of information along the hierarchy. The literature on banking shows how the many distinctive features of internal organization (the organization chart, the extent to which decision-making is centralized or decentralized, internal control systems, procedures for communicating between the various organizational levels, etc.) have a decisive influence on the strategies of the branch managers and, through them, on the allocation of credit to small firms.

The central role played de facto by branch managers in lending to SMEs has not been given sufficient attention by the literature, in part owing to a lack of data. Some recent contributions have begun to fill this gap. Liberti (2005) and Liberti and Mian (2006) show that as one goes higher up in the hierarchy of the bank organization, and as the customer grows more distant from the decision-making center, qualitative elements will weigh less on loan decisions. This phenomenon shows how qualitative information is in fact collected and stored at the lower hierarchical levels and how its transmission costs increase with geographical and organizational distance (defined as the number of levels involved in the decision-making process). Stein (2002) and Berger et al. (2005) broaden the debate to include the effects that different organizational models can have on the incentives for banks' branch managers. In particular, Stein uses a theoretical model to show how a large bank can discourage the acquisition of soft information by a branch manager where data must be communicated along multiple hierarchical layers and the transmission of the data becomes extremely costly. This effect does not occur in the case of small banks, where there is markedly less physical and hierarchical distance between the headquarters and the branch network. As a result,

2 We use this term to denote all of the automated techniques for assessing creditworthiness, which are described later in the paper.

the branch managers of major banks may have a greater incentive to collect hard data, which are more easily provided by large firms, whereas smaller intermediaries can specialize in the acquisition of qualitative information and in small business lending. For Japan, Uchida et al. (2006) show that branch managers' traits are not important for the purposes of accumulating soft information and explain this result with reference to the likelihood that the strong social cohesion of Japanese society reduces the costs of transmitting qualitative information. In an empirical analysis of Italian data, Ferri (1997) shows a positive correlation between branch managers' mobility and bank size and explains this result by referring to the greater difficulties that large banks face in limiting the moral hazard stemming from the potential for collusion between branch managers and bank customers. The large physical and organizational distance between the headquarters and branches of major banks increases the costs of monitoring, driving these intermediaries to use the mobility of bank managers as a tool for limiting their opportunities to reap private benefits. By contrast, the geographical and organizational proximity typical of small banks encourages them to maximize the benefits of the stability of local managers while at the same time maintaining monitoring costs at reasonably low levels, given that the top managers may belong to the same local community as the branch manager. Moreover, if a large bank is specialized in lending to medium-sized and large firms that are capable of providing hard data [Stein (2002)], it follows that these banks will be less motivated to keep the local managers at the same branch for a long time. In these cases, mobility can represent a way of furthering local managers' careers and of warding off excessive inertia in the administration of local branches. Hertzberg et al. (2007) use data on the turnover of the local bank managers of an Argentinean bank and show that the bank utilizes mobility to persuade these managers to report information on the creditworthiness of borrowers accurately. Scott (2006) shows that for a sample of small U.S. firms, the turnover of local managers increases the likelihood of credit rationing.

The literature previously surveyed brings out four key themes related to the characteristics of branch managers' activities:

Hierarchical and geographical distance between bank headquarters and branch managers: as we have seen, distance can affect the costs of transmitting qualitative information collected at the local level and, accordingly, the incentive to acquire it. It can also determine the cost of monitoring branches at the central level.

Decision-making autonomy of the branch manager: this variable undoubtedly enhances the incentives for a branch manager to acquire soft information, but at the same time, it increases the costs of control and coordination at the central level.

Tenure of the branch manager: a trade-off similar to the one described earlier can also be generated. The higher stability of a branch manager's position may lead to more incentives to acquire soft information, but the costs of control can also increase (owing, for example, to moral hazard).

Incentives: economic incentives for branch managers can help reduce moral hazard by aligning the objectives of peripheral agents with those of the bank's central management, with the danger, however, of transferring excessive risk to the branch manager.

The role played by technological innovation in the development of the loan-granting procedures adopted by banks from the 1990s onward was recalled earlier. One of the most important consequences of ICT advances in the banking industry was the sharp reduction in the cost of data processing, that is, of using data for administrative purposes at the various organizational levels. The new regulations on minimum capital requirements also provided strong incentives for the adoption of statistical techniques for measuring credit risk; the methods vary, but their distinctive feature consists of their ability to group customers within a finite number of categories, associating with each one a synthetic indicator that expresses the probability of default and accordingly the degree of risk. The introduction of these techniques can influence the role of branch managers in the allocation of credit in various ways. Indeed, credit scoring can represent an alternative means of assessing creditworthiness that contrasts with decision-making processes that emphasize qualitative information and the close interaction of branch managers with customers. At the same time, the adoption of scoring techniques allows, at least partly, the transformation of soft information into processed data and facilitates the control of branches. These issues, which are closely interrelated, cannot be dealt with without further exploring the specific nature of credit scoring, including its relatively recent introduction into the Italian banking system. From this perspective, the analysis of credit scoring complements that of organizational variables and central-peripheral relations. Although these techniques have been used since the fifties, their modeling has greatly evolved in the last decade [Hand and Thomas (2005), Hand and Zhou (2009)]. At the very beginning, these models were aimed at supporting accept-or-reject decisions; nowadays markets are more mature, emphasis is moving from acquisition to retention, and the models tend to back a wide range of business objectives. Models are expected to drive the optimal decision in terms of price and credit conditions. Furthermore, models are increasingly able to go beyond the individual measure of risk by recognizing the portfolio dimension. Quantitative methods include both credit scoring models, which distinguish cases of expected default from non-default using statistical techniques such as discriminant analysis or logit and probit analysis, and internal rating systems, which more or less automatically map individual borrowers (or, in the most sophisticated cases, the different lines of credit of each borrower) on a scale of judgments [Allen et al. (2004)].

These technologies can allow, even before the formulation of any final judgment, discretionary interventions by one or more persons to assess qualitative elements not explicitly considered in the model. The degree of flexibility in the use of credit scoring, therefore, varies depending both on the characteristics of the procedures adopted and on their importance in lending decisions and customer relationships. In this report, the term scoring refers to all of these instruments indiscriminately. What follows is a brief survey of the main questions related to the adoption and use of credit scoring techniques, which will be described more fully below:

Banks' characteristics and the adoption of credit scoring techniques: in Europe, the introduction of credit scoring was more gradual than in the U.S. and occurred later [Degryse and Ongena (2004)]. Recently, however, credit scoring has been widely adopted by Italian banks. The literature concurs in emphasizing how the adoption of credit scoring techniques is influenced both by the size of banks and by their organization. Size acts in the usual ways: larger banks have more resources to invest in new techniques, and the cost of investments is then distributed over a broader loan portfolio. Moreover, large banks report greater diseconomies owing to distance, including difficulties in transmitting soft information internally, in selecting and monitoring loans, and in designing the right incentives for local managers. The adoption of credit scoring techniques reduces the costs of screening and monitoring firm activity and controlling branch managers, which mitigates the problems of monitoring from a distance [Berger et al. (2005)].

The characteristics of the scoring techniques: scoring techniques can differ both in their origin (developed internally or acquired externally) and in the datasets that are processed. The use of methods developed internally by banks implies greater control and more flexibility, in the sense that these methods are easier to modify if the bank no longer considers them adequate. Moreover, broad recourse by banks to externally developed techniques could lead to greater homogeneity in the criteria for assessing creditworthiness across the various intermediaries. These techniques mostly rely on quantitative data inputs to formulate scores. As mentioned earlier, there is also concern about a possible decline in the importance of soft information in lending decisions, with the result that start-ups and smaller firms, which rely more on bank credit, could be adversely affected by an intense and rigid use of these methods [Cowan and Cowan (2006)]. However, the empirical evidence to date, above all that concerning the U.S., does not appear to justify these fears [Berger et al. (2005)].

The importance of scoring techniques and how they are used: although scoring techniques have by now been widely adopted by banks, their importance in assessing customers' creditworthiness is not a foregone conclusion. This issue is by no means secondary, as the impact of these techniques on the extension of credit varies depending on whether banks use them as the main instrument for assessing creditworthiness or as a supplementary instrument along with other evaluation techniques [Berger et al. (2005)]. Based on the results of a recent survey conducted on a sample of U.S. banks, the scores for small firms are considered to be less important than the traditional indicators of creditworthiness, such as cash flow and available collateral [Cowan and Cowan (2006)].

In the next section, the distinction between intermediaries is based on size and institutional set-up. Given that information asymmetries, agency problems, data transmission costs, and economies of scale linked to the use of IT become more complex when we move from individual banks to groups, the survey also distinguishes between stand-alone banks and members of a group.

Class of bank         Northwest   Northeast   Center   South/Islands   Total
Medium-sized/large           17           9        7               4      37
Small group-member           27          21       24              20      92
Small stand-alone            10           9       10               3      32
Mutual                       54          57       40              10     161
Total                       108          96       81              37     322

Table 1 Sample composition (number of banks)

The final sample included 333 banks and 322 responses. The missing answers all refer to banks whose business lending was marginal or nil. The composition of the sample by geographical area and size is shown in Table 1. The sample accounts for 82.8 percent of the total amount of outstanding loans to non-financial firms in 2007. Coverage is high throughout the country, ranging from 81.0 percent in the regions of the Northwest to 90.4 percent in the South; among the size/institutional classes, this percentage is reduced for small stand-alone banks (34.7 per cent), a category that includes several branches of foreign banks that were not surveyed. The average large bank in the sample has 519 branches and more than 5,600 employees. It is about 10 times bigger than the average small bank belonging to a group, 20 times that of a small stand-alone bank, and 50 times that of a mutual bank. The questionnaire contains mostly qualitative questions, and the data have accordingly been aggregated in two separate ways: either as simple frequencies of responses or as weighted frequencies where the weights are equal to loans to SMEs or large firms, depending on which type of borrower the question refers to. The significance assigned in the first case is that of the diffusion of the phenomena (for example, what share of the banks use credit scoring), and that assigned in the second case is that of the likelihood of the borrower encountering a given phenomenon (using the same example, the share of lending to SMEs, or large firms granted by banks using scoring techniques).
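To make the two aggregation schemes described above concrete, the sketch below computes both a simple response frequency and a lending-weighted frequency. The bank records, loan amounts, and yes/no answers are invented for illustration and are not survey data.

```python
# A minimal sketch of the two aggregation schemes: the share of banks reporting
# a given practice (diffusion) versus the share of SME lending granted by banks
# reporting it (the likelihood that a borrower encounters the practice).
# All figures below are illustrative only.
banks = [
    {"uses_scoring": True,  "sme_loans": 900.0},
    {"uses_scoring": False, "sme_loans": 150.0},
    {"uses_scoring": True,  "sme_loans": 300.0},
    {"uses_scoring": False, "sme_loans": 50.0},
]

simple_freq = sum(b["uses_scoring"] for b in banks) / len(banks)
weighted_freq = (
    sum(b["sme_loans"] for b in banks if b["uses_scoring"])
    / sum(b["sme_loans"] for b in banks)
)
print(f"share of banks using scoring:        {simple_freq:.0%}")
print(f"share of SME lending at those banks: {weighted_freq:.0%}")
```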

The survey
Our data are taken from a qualitative questionnaire (reproduced in the Appendix). The purpose of the survey is to gather information about the organizational aspects and scoring techniques used in the lending process. The questions capture banks organizational structure, meaning both their specialization by customer segment and the number of hierarchical layers involved in the decision to lend to a firm. For each hierarchical level, the survey establishes the degree of autonomy granted to approve loan applications and possibly to establish their terms and conditions. Finally, the existence and type of economic incentives for branch managers and the length of their tenure are also considered. In the second part of the questionnaire, the questions explore the adoption of statistical-quantitative techniques for evaluating firms and their use in setting terms and conditions of loans as well as in the monitoring of the loans. Next, the characteristics of the models are surveyed, particularly the data used to calculate the scores and the role of quantitative and qualitative information in evaluating new loan applicants. The questionnaire was submitted to intermediaries in 2007 through the Bank of Italys branch network. The selection of the banks was aimed at ensuring adequate coverage, both geographically and by the type of bank (medium-sized and large banks, small stand-alone banks and others belonging to groups, mutual banks). The sample design was based on that used for a database including interest rates (TAXIA), which, until the end of 2006, surveyed 215 reporting institutions selected by their size (measured by total lending to customers), geographical location, and share of loans reported to the Central Credit Register. The original sample of 215 banks was modified in two ways. First, intermediaries specializing in activities such as factoring, leasing, household mortgages, and credit recovery were excluded, given that the questionnaire focused on lending to firms in traditional technical formats. Second, it was left to local data collectors to submit the questionnaire to banks excluded from the TAXIA sample, provided that the banks did not belong to the minor size category.

Banks' organization, SME lending, and the role of the branch manager
Distance from headquarters and banking presence in local credit markets
Some recent literature has focused on the effects that the distance between a bank and a firm can have on access to credit and its terms and conditions. This literature often treats the bank as a single entity, but if instead, as noted above, we recognize that the bank has an articulated internal structure, the question can be approached from two different standpoints, i.e., one referring to the distance between the borrower firm and the unit within the bank charged with lending decisions and the other referring to the distance between that unit and the head office, where the people who exercise control powers are located. This relationship between center and branch may be influenced by agency problems that

may be just as severe as those traditionally considered in bank-firm relations. Apart from the costs of monitoring the branch managers activity, distance can also increase the costs of transmitting qualitative information, lessen the incentives for the collection of this information, and increase the effort required to pass on best practices.3 When instead of geographical distance there is hierarchical distance, measured by the number of organizational levels along the chain of command involved in the decision, the implications are similar. In what follows, we describe several indicators of the distance between the center and periphery for the banks in our sample, the number of local markets in which they do business, and how these factors evolved between 2000 and 2006. To compute distance, we must first define the relevant geographical units and devise a suitable gauge. To this end, we define the location of the persons who ultimately control the bank as the city in which the banks legal head office is established and that of the branch manager as the locality of the branch, itself defined as the main city within each of the local labor market areas mapped by Istat in 2001. This procedure effectively balances the need for a geographical classification detailed enough to allow for precise measurement of the distance between head office and branch with the need to capture the differences between the market areas where the head office and the branches are located.4 Once the local labor market area where the banks legal office is located is determined, we calculate the distance between that area and all of the other local labor markets where the bank has at least one branch.5 The distances so obtained are weighted according to the local labor markets share in the banks total lending. Weighting is necessary to prevent branches in far-off areas that do not account for a significant portion of the banks assets from having a disproportionate effect on the average distance. The results are given in Table 2. The mean for this indicator in 2006 is 47 kilometers, and the median is 21 kilometers. The larger banks have a more extensive branch network and accordingly higher average distances than medium-sized, small, and mutual banks. Statistical tests on these averages indicate that the differences between classes of banks are significantly different from zero. There are also significant inter-category differences in the number of local labor systems in which the bank has at least one branch.
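The following sketch illustrates the distance measure described above: great-circle distances from the head office's local labor market area to each area with at least one branch, weighted by that area's share of the bank's lending. The coordinates and lending figures are hypothetical, and the haversine formula used here is just one standard way of implementing the great-circle calculation mentioned in the notes.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; a spherical Earth is assumed

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers (haversine form)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def weighted_head_office_distance(head_office, branch_areas):
    """Average head office-branch distance, weighted by each local labor market
    area's share of the bank's total lending. Areas coinciding with the head
    office's own area enter with distance zero (same coordinates)."""
    total_lending = sum(b["lending"] for b in branch_areas)
    return sum(
        great_circle_km(head_office[0], head_office[1], b["lat"], b["lon"])
        * b["lending"] / total_lending
        for b in branch_areas
    )

# Illustrative (hypothetical) figures: head office in Milan, branches in three areas.
bank = [
    {"lat": 45.4642, "lon": 9.1900, "lending": 800.0},   # same area as head office
    {"lat": 45.0703, "lon": 7.6869, "lending": 300.0},   # Turin area
    {"lat": 44.4949, "lon": 11.3426, "lending": 150.0},  # Bologna area
]
print(round(weighted_head_office_distance((45.4642, 9.1900), bank), 1), "km")
```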
                        2000                               2006
Class of bank           Mean   Median   Difference2       Mean   Median   Difference2
Medium-sized/large       152      116           126        151      117           136
Small group-member        50       30            99         66       39           100
Small stand-alone         26       21            67         31       22            69
Mutual                    15       14            64         16       14            68
Total                     42       19           121         47       21           132

Source: Sample survey of 322 banks. 1 - Banks present in sample both in 2000 and in 2006. Distance is mean distance in kilometers of head office from the local labor systems in which the bank is present, weighted by the bank's lending in that local labor system. 2 - Interquartile difference over median, in percent.

Table 2 Distance head office-branch1 (kilometers, percentages)

The average distance is highly variable. The inter-quartile difference is equal to 132 percent of the median for all banks and is also high even within each of the size categories. Between 2000 and 2006, there was a broad increase in the distance between the local labor system of the head office and the branch. The median increased for all categories of bank (except for mutual banks, for which it held constant), but the mean diminished slightly among large banks while increasing for small and mutual banks. Statistical tests on the averages indicate that the differences are significantly different from zero for small and mutual banks but not for the large banks. The average number of local labor market areas in which banks had branches also increased over this period. In general, the increase in distance mainly involved medium-sized banks, especially for banks belonging to groups, and, to a lesser extent, small non-group banks and mutual banks. In part, this pattern occurred because some large banks involved in mergers reduced the number of branches and of local labor systems in which they were present, shortening their range of operations. Even excluding these banks, however, the increase of the average distance for the large banks is small and less pronounced than that recorded for the other size categories. It is also interesting to examine the evolution of headquarters-branch distances for banking groups, considering all of the local labor market areas with at least one branch and computing the distance from the parent bank's head office. For such an evaluation, we exclude groups that have only one bank and any branches or subsidiaries abroad. For the 33 banking groups so identified, the mean distance increases from 180 to

Berger and De Young (2001) have shown that a bank holding companys ability to transfer its own efficiency to the other banks in a group diminishes as the distance between the former and latter increases. Deng and Elyasiani (2005) have found that the risk of the holding company increases with the distance from its subsidiaries and interpret this in terms of greater ability of the local manager to engage in opportunistic behavior when operating in far-away markets. Some of the foregoing observations on the relationship between the head office and local branches and the related agency problems might suggest the adoption of a continuous space and the definition of distance simply as the distance in kilometers between the head office and the branch. However, other aspects of the relationship are better defined by reference to the distance between the local markets of the two units. The first method would give a better gauge of the physical distance; the second highlights the distance and thus the differences between the broader areas within which the two units are located. The local labor systems are identified by the geographical coordinates of their main municipality; the distance is calculated by the great circle formula, assuming that the earth is spherical. Another assumption concerns the distance to branches located within the same local labor system as the head office. This distance could be set to zero or otherwise calculated on the basis of the land area of that system. The results do not change when the distance is set to zero.


217 kilometers, the median increases from 131 to 241 kilometers, and the average number of local labor markets increases from 107 to 142. The foregoing suggests several considerations. First, the variability in banks size is reflected in huge differences in the extent of branch networks and in the distance between the center and peripheral branches. However, even within the various size and institutional classes the differences remain, which means that the problem of the cost of transmission of qualitative information, and of agency problems in general, between branch managers and head offices differs in extent from bank to bank, as do the organizational structures adopted to deal with them (see below). Second, between 2000 and 2006, there was a general lengthening of distances, owing in part to the introduction of information and communications technology, which greatly reduced the cost of acquiring, processing, and transmitting information [Berger (2003)].6 Third, the increase in distance was greater among small banks belonging to groups, among small non-group banks and, to some extent, among mutual banks. Large banks held the average distance of branches from the head office roughly unchanged; their overall range of action was increased through their group structure. This is in contradiction with some recent works showing that small and minor banks have grown more than large banks [Bonaccorsi di Patti et al. (2005)].

Organizational structure and layers of hierarchy

The previous section considered the distance between the head office and branches in geographical terms. Now we consider it in terms of organizational structure, measuring the distance between the branch manager and the top management of the bank by the organizational model adopted and the number of layers of management between them. The model may have a divisional structure, in which customers are segmented by size, or a single structure that performs all lending activities. In a divisional structure, the middle management and/or the type of peripheral unit varies with the size of the borrower firm. Along with the modification of organizational roles, the delegation of powers and the way in which lending is planned and carried out also change. The number of layers of hierarchy is used to gauge the depth of the organizational structure.

The model of organization: the responses to the questionnaire indicate that over 70 percent of the large banks are organized by divisions, with customers segmented by size and typically divided into SMEs and large firms. For small banks belonging to groups, this percentage falls to 33 percent, and for small stand-alone banks, it falls to 24 percent; for mutual banks, it is 10 percent. As a rule, the variable used for segmenting firms is sales, although in a significant number of cases, small banks use loan size. The modal sales threshold dividing small from large firms is €2.5 million. Among small banks, organization by divisions is a very recent phenomenon: almost 50 percent of the small and mutual banks with this model adopted it in the last three years, compared with only about 10 percent of the large banks. Typically, large banks have a greater differentiation of products, markets, and customers. Accordingly, a divisional organization can exploit specialization by making a single manager responsible for a given product or geographical area. A divisional structure also makes it easier to adapt to the industrial or local context. The large banks, thanks to potential economies of scale, can more readily sustain the costs of divisional organization, which entails more staff and structures engaged in similar activities. Among other potential costs, there is the need for closer coordination of relatively autonomous, diversified units.

Layers of hierarchy: the length of the chain of command is given by the number of ranks between the branch manager and the CEO.7 These ranks are hierarchically ordered, and each has specific powers in terms of maximum loan amounts. In what follows, we consider the positions involved in lending to SMEs.8 The average number of layers varies significantly by institutional category: 5 for large banks, 4 for small group-member banks, 3.6 for small stand-alone banks, and 2.8 for mutual banks. Figure 1 shows the distribution of layers of hierarchy according to this classification.

Source: Sample survey of 322 banks. 1 - The number of layers is the number of hierarchically ranked positions from branch manager through chief executive officer. The horizontal axis gives the number of layers and the vertical axis gives the percentage shares of the number of layers for each class of institution.

Figure 1 Distribution of number of layers in hierarchy1 (numbers and percentages)

6 Felici and Pagnini (2008) show that, with other things (including profitability) being equal, the banks with larger ICT endowments increase their capacity to move into markets far away from their existing localities.
7 We consider only the ranks that correspond to a significant rise in hierarchical level. This excludes, for instance, deputies and auxiliary staff but includes all the grades assigned to run an organizational unit.
8 We do not comment on the data on layers of hierarchy involved in lending to large firms or the degree of decision-making autonomy enjoyed by branch managers in such lending. This decision follows from the limited role of branch managers in this segment. Moreover, few banks have different organizational structures for lending to SMEs and to large firms.

As bank size increases, the average number of hierarchical layers and the variance of their distribution both increase. The large banks are those with the greatest diversity in the length of the chain of command. One in five has a simplified organization (with fewer than three layers), whereas one in three has a highly complex structure (more than six layers). For mutual banks, more than 40 percent have an elementary structure with just two levels: branch manager and CEO. However, even in this class, there are more complex organizations; one in five has four or more layers. The number of layers of hierarchy has a significant impact on a banks operation. On the one hand, more layers may mean higher costs for the transmission of information from one level to another and longer decision times. On the other hand, a flatter organizational chart for a given size of staff implies larger units and thus a larger area to control, i.e., a larger number of subordinates under each supervisor.
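As a back-of-the-envelope illustration of this trade-off (not a computation performed in the survey), the sketch below solves for the uniform span of control implied by a fixed staff size and a given number of hierarchical layers; the staff figure is arbitrary.

```python
# Illustration only: with total lending staff fixed, fewer hierarchical layers
# force a wider span of control on each supervisor.
def uniform_span(total_staff: int, layers: int) -> float:
    """Span of control s such that 1 + s + s**2 + ... + s**(layers-1) = total_staff,
    assuming a uniform tree with the CEO at the top. Solved by bisection."""
    def size(s: float) -> float:
        return sum(s ** d for d in range(layers))
    lo, hi = 1.0, float(total_staff)
    for _ in range(100):
        mid = (lo + hi) / 2
        if size(mid) < total_staff:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for layers in (3, 4, 5, 6):
    print(f"{layers} layers -> average span of control of about {uniform_span(10_000, layers):.1f}")
```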
                        Autonomous decision-making power of branch        Index of relative power
                        manager, thousands of euros1                      delegated2
Class of bank           Mean     Median   Difference3                     Mean    Median   Difference3
Medium-sized/large       546        250           146                      5.3       3.1           159
Small group-member       202        125           118                     11.0       8.6           119
Small stand-alone         92         80           125                     13.7      14.3           130
Mutual                    53         30           217                     19.2      16.7           122
Total                    154         75           173                     14.7      10.8           177

Source: Sample survey of 322 banks. 1 - Banks were asked: "Consider the granting of loans to non-financial companies applying to your bank for the first time and which, based on the information available, have no solvency problems. What is the maximum amount of credit (in thousands of euros) that can be autonomously granted by ....". The figure represents the power delegated to the branch manager or head of local unit. Lending to SMEs. 2 - The relative index is the amount of power delegated to the branch manager normalized with respect to that of the CEO. 3 - Interquartile difference over median, in percent.

Table 3 Delegation of powers (thousands of euros and percent)

Branch managers' decision power


The branch manager obviously has a privileged position for acquiring information relevant to loan decisions. However, the formal authority, i.e., the right of control, belongs to the top management of the bank. The organizational problems posed by the lack of coincidence of these two figures can be addressed by transferring the information to the person endowed with the formal authority (central decision-making) or by assigning the power to decide to the person who has obtained the information (decentralized decision-making).9 One solution creates problems of information transmission, whereas the other creates problems of control.10 Between these two extremes, there is a continuum of degrees of decentralization to resolve the trade-off. The literature on corporate organization has dealt with the problem of measuring the degree of decentralization of the power of decision from various standpoints.11 In this survey, the banks were asked to indicate, for every hierarchical level, the maximum amount of credit that could be granted on that level's own authority. This information was used to construct an indicator of the degree of decentralization. This section also refers to lending to SMEs.12 The amount of credit that the branch manager can grant on his own authority increases with the size of the bank (Table 3). The mean is €550,000 (and the median €250,000) for large banks, €200,000 for small banks belonging to groups, €90,000 for small stand-alone banks, and €50,000 for mutual banks. However, these means conceal some variability within the subgroups, as shown by the interquartile difference as a ratio to the median. The greatest variability is found among mutual banks; in one-fifth of them, the value is zero, and, in one-fifth, it is more than €100,000. The amount of credit that the branch manager can extend autonomously is larger for mortgage loans (€124,000) and smaller for uncollateralized overdraft facilities (€62,000) or unsecured loans (€54,000). These differences reflect the role of collateral.

Comparing the loan authorization power of the branch manager with that of the top management, we can construct an indicator to measure the degree of decentralization. The branch manager (or head of the local unit of the bank) and the CEO are the two figures that appear in virtually all of the organizational charts. Comparing the powers delegated to them, we can build an index of the branch managers autonomy with respect to the powers of the CEO. This index equals 5 percent in large banks, 11 percent in small banks belonging to groups, 14 percent in small stand-alone banks, and nearly 20 percent in mutual banks. The index is negatively correlated with bank size because as the bank becomes larger, the powers of the CEO increase more than proportionately with respect to those of the branch manager. The CEO performs different functions depending on the type of bank. In mutual banks, for example, the most important

See Christie et al. (2003). The distinction between real and formal authority is from Aghion and Tirole (1997). 10 The complete centralization of the power of decision in the hands of top management could lead to organizational failures as a result of the branch managers lack of incentive to acquire information. Also, the transmission of information from lower to higher levels may entail a loss of information or at any rate a lag between information acquisition and decision. Finally, if agents are rationally constrained la Simon, the banks top management might not be capable of handling a large information flow. Decentralization, on the other hand, allows the power of decision to be in the hands of the person with the information but leads to top managements loss of control over that persons choices. The costs of delegating formal authority are defined as agency costs and depend on the fact that the aims pursued by the bank as such do not necessarily coincide with the personal objectives of the staff. Agency problems typically involve collusion between the branch manager and the borrower firm or the manipulation of the data that the branch manager has gathered. 11 Christie et al. (2003) and Acemoglu et al. (2006) identify the autonomy of decision with the presence of profit centers within the corporation. This indicator reflects the observation that a cost center controls either revenue or costs but not both, whereas a profit center makes decisions involving both costs and revenue. The degree of decentralization is thus 0 when the level immediately below the CEO is a cost center and 1 when it is a profit center. 12 The questionnaire also had a question about the autonomy of branch managers to price loans; however, owing to the number of non-responses, we have elected not to comment on these data.


                        Tenure in months1                    Trend in the last three years1
Class of bank           Mean   Median   Difference2          Shortened   Unchanged   Lengthened
Medium-sized/large        32       32            33               39.4        45.5         15.1
Small group-member        38       36            50               36.5        54.0          9.5
Small stand-alone         49       38            63               26.7        60.0         13.3
Mutual                    50       48            50               41.8        42.5         15.7
Total                     45       38            71               38.5        47.7         13.8

Source: Sample survey of 322 banks. 1 - Respondents were asked: "Indicate the average length of tenure of branch managers, in months (even an estimate). In the last three years has it lengthened, shortened or remained unchanged?" 2 - Interquartile difference over median, in percent.

Table 4 Average tenure of branch managers (absolute values and percentages)

Source: Sample survey of 322 banks. 1 - Percentage of banks that describe at least one of the factors shown in Table 5 as very important for the compensation of branch managers. The horizontal line represents the overall sample mean.

Figure 2 Use of economic incentives for branch managers1

decisions on loans to SMEs are taken directly by the bank's board or council, whereas in large banks, they are taken lower down in the chain of command. Rather than compare such drastically different types of banks, again it is more meaningful to observe the variability within each category. The ratio of the interquartile difference to the median again shows great variability. The evidence presented so far refers to the year 2006. The questionnaire also asked about the trend in the past three years. Half of the banks reported a tendency toward greater decentralization, whereas just 4 percent declared that they had centralized their decision-making powers. This tendency characterized all the banks but was most pronounced for the large ones (about three quarters of which reported greater decentralization). Several conclusions follow. First, the size of the bank is a major determinant of the delegation of powers, in absolute value, to the branch manager. Some large banks, which tend to be farther away from the local credit markets and the borrower firms, could endow their branches with considerable autonomy with a view to creating streamlined structures that are relatively immune to the inertia and lack of dynamism typical of large corporations and closer to the local community. The amount of lending authority may also be affected by the type of customer served. Banks with larger customers tend to have higher ceilings on their local units' lending powers. Second, decentralization is more pronounced in small banks than in large banks. A greater geographical proximity of local to central offices and less complex lending business may foster the decentralization of powers of decision, thanks among other things to a greater ease of control of top management over local managers. However, by itself, bank size does not explain the variability in the degree of decentralization. Within each of our size classes, the observed variance would appear to indicate significant variety in organizational arrangements. The responses further reveal a general tendency towards decentralization. Together with the recent adoption of divisional models,

this tendency highlights a certain organizational dynamism, presumably in connection with an increasing geographical distance between a head office and branches and with the diffusion of ICT. Finally, the greater decentralization among large banks may be a response to the competition of smaller institutions in lending to SMEs in local credit markets.

Branch managers' tenure


The tenure of branch managers presents the bank with a trade-off. Greater stability in the position facilitates contacts and familiarity with the local market and hence the acquisition of soft information. However, it also heightens the informational asymmetry between a branch manager and the head office, possibly enabling the former to reap private benefits (by collusion with local borrowers, say, or manipulation of the information transmitted up the chain of command). The survey found that the mean time for which branch managers held their position was nearly four years, with a median of 38 months and a mode of 36 months (Table 4). These figures are similar to those found in a similar survey by Ferri (1997). The mode (three years) could depend on corporate routines and widespread organizational models shared by many banks. The term of office is shorter in the larger banks and longer in small stand-alone banks and mutual banks. The standard statistical tests show that all of these differences are significantly different from zero. Small banks belonging to banking groups have values similar to large banks, suggesting that group organizational strategies extend to smaller intermediaries. The degree of mobility of branch managers shows considerable variability not only for the entire sample but also within each category of banks. Regarding trends in tenure, nearly 40 percent of the sample banks reported that tenure had shortened in the last three years, whereas 14 percent reported a lengthening. The tendency towards greater mobility was also

broadly uniform within bank classes. The heightened mobility of branch managers may be related to the introduction of ICT, which in practice may have reduced the rents deriving from close local bank-firm relations. It may also have led to the increase in head-branch distance, which in turn presumably increases the costs of monitoring local managers activities; to mergers and corporate restructuring, which have affected a large number of banks since the 1990s; and to heightened competition in credit markets, leading in turn to stiffer competition in the local bank managers markets.

Branch managers' incentives


The potential costs connected with the distance between the head office and local units and with the misalignment between the personal objectives of local agents and those of the bank as such can be mitigated by incentive systems linking agents compensation to results [Milgrom and Roberts (1992)]. However, incentive systems can also entail an excessive transfer of risk to the local agent.

Source: Sample survey of 322 banks

Figure 3 Diffusion of credit scoring (percentages; unweighted frequencies)
(Horizontal bar chart by class of bank, with separate bars for SMEs and large corporates.)

Incentives are most common in the larger banks (Figure 2), with 83 percent of the large banks in our sample stating that the use of incentives in connection with lending for branch managers' compensation is very important, compared with an overall sample mean of 57 percent. Mutual banks make the least use of incentives (46 percent). This may be explained by the higher incidence of agency costs in larger banks, i.e., a greater geographical and organizational distance between the center and the periphery leads to a lower ability to monitor local agents' activity. Consequently, incentives should help to align the branch managers' goals with the bank's objectives.

Regarding the factors on which incentives are based, the most common response was the overall profitability of the local unit (i.e., gross income). This factor was especially important for large banks and small group-member banks (Table 5). Practically nine-tenths of all banks using incentives stated that the overall profitability of the branch was a very significant factor. By contrast, small stand-alone banks and mutual banks are much more sensitive to bad loan ratios or to variations in bad loans. To simplify, larger banks and those belonging to groups tend to link local managers' incentives to the profitability of the branch and of its loan portfolio, whereas other banks, including mutual banks, tend to stress the containment of bad loans and limitation of credit risk.

Factor                                               Medium-sized/   Small group-   Small stand-   Mutual   Total
                                                     large banks     member banks   alone banks    banks    banks
Growth in lending                                           28.6            19.7           33.3     36.1     29.6
Bad debt and/or impaired loan rate                           5.7            16.4           37.5     53.0     32.0
Change in bad debts and/or impaired loans                    8.6            21.3           58.3     49.4     35.0
Net earnings on loan portfolio                              25.7            14.7           29.2     19.3     20.2
Overall profitability of unit (i.e., gross income)          88.6            90.2           62.5     60.2     74.4
Average potential riskiness of loan portfolio               11.4            18.0           20.8     32.5     23.2

Source: Sample survey of 322 banks. 1 - Percentage of banks that consider each factor very important for determining branch managers' compensation. The choices were very important, fairly important, not very important and not important at all. Sample limited to banks using incentives linked to the factors specified.

Table 5 Factors considered in determining incentives for branch managers1 (percentages)

Factor                                               Medium-sized/   Small group-   Small stand-   Mutual   Total
                                                     large banks     member banks   alone banks    banks    banks
Growth in lending                                           28.1            25.0           13.6      5.2     15.9
Bad debt and/or impaired loan rate                          34.6            33.3           31.6     48.6     40.1
Change in bad debts and/or impaired loans                   39.3            42.6           36.8     48.0     43.7
Net earnings on loan portfolio                              23.1            19.2           35.0     20.9     22.5
Overall profitability of unit (e.g., gross income)          26.5            48.3           36.4     41.0     40.1
Average potential riskiness of loan portfolio               48.3            33.3           33.3     43.6     40.3

Source: Sample survey of 322 banks. 1 - Percentage of banks reporting an increase in the last three years in the importance of the factors indicated in branch managers' compensation. The possible answers were increased, essentially unchanged, decreased and not relevant. Sample limited to banks indicating a tendency.

Table 6 Trend in use of incentives for branch managers1 (percentages)


Source: Sample survey of 322 banks

Figure 4 Introduction of credit scoring (percentages; unweighted frequencies)
(Two panels, SMEs and large corporates; horizontal axis 2000-2006; one line per class of bank.)

In the last three years, the banks have made greater use of incentive schemes for branch managers compensation (Table 6). The factors whose relative importance has increased have been gains in the overall profitability of the branch and the containment of bad loans. The share of banks reporting an increase in the relevance attached to these factors was more than 40 percent compared to fewer than 5 percent that reported a reduction. Again, the mutual banks paid the closest attention to the incidence and variation of bad loans and substandard loan assets.

The diffusion of credit scoring

The plunging cost of data processing in recent years has encouraged banks to introduce statistical techniques for measuring credit risk, supplementing their external and internal sources of information. Our survey of the Italian banking system shows the diffusion of credit scoring techniques for business lending. Consistent with our hypotheses in the introductory section, and as other studies have shown [Bofondi and Lotti (2005)], these techniques have been mainly adopted by larger banks with extensive branch networks that can exploit economies of scale. At the end of 2006, 57 percent of the sample banks had scoring techniques in place to assess the creditworthiness of firms, whether large or small. However, the distribution was not uniform by the type of intermediary; diffusion reached 97 percent for medium-sized and large banks, 64 percent for small banks belonging to groups, 59 percent for small stand-alone banks, and just over 40 percent for mutual banks (Figure 3). Moreover, scoring techniques were systematically more common for lending to SMEs. For large banks, the difference was more than 13 percentage points (Figure 4), i.e., the prevailing tendency is to use these techniques to reduce selection costs for smaller firms, even where the bank is less specialized in this segment, thus freeing resources for screening in the core business segment. Italian banks began to adopt scoring techniques substantially in 2003, and their introduction has been by degrees, accelerating sharply in recent years. In 2000, less than 10 percent of the banks had such techniques in place, whereas in 2003, almost 25 percent did, and in 2006, more than half did (Figure 4).

The spread of scoring techniques in recent years is presumably related to the new capital adequacy accords (Basel II), which link capital requirements more directly to customer risk, with incentives for a more accurate evaluation of the quality of the loan portfolio.13 The credit scoring techniques adopted by Italian banks differ both in their origin (internal/external) and in the data used. At the end of 2006, more than 50 percent of the banks had actively participated in the development and introduction of a credit scoring methodology, either independently or in cooperation with other institutions or consortia (Table 7). The degree of participation is correlated with bank size. Nearly all of the larger banks had contributed

13 After classifying their retail customers by risk under the internal ratings approach, banks must estimate the main risk components for each class and then calculate, by the Basel Committees methodology, specific capital requirements for each. Consequently, the introduction of credit scoring may be highly advantageous to the banks, insofar as it can lower capital charges [Jankowitsch et al. (2007)]. So far, very few banks, and only large ones, have begun to adopt these methodologies for calculating capital requirements. To do so, banks must meet stringent qualitative and quantitative requirements that are subject to a complex process of supervisory validation. However, the possibility that the internal models may be validated and recognized for supervisory purposes has nevertheless fostered their diffusion [Bank for International Settlements (2005)] by stimulating studies on the methodologies by specialized companies and by creating incentives for initiatives by consortiums of institutions.


actively to the development of the scoring method; the contribution from small banks was less common.
Bank class and firm size    Internal   In collaboration with   Purchase from   Purchase from     Other
                                       other institutions      group company   outside company
Lending to SMEs
Medium-sized/large              46.8                    47.6             3.3               2.4     0.0
Small group-member              24.4                    24.6            22.5              26.0     0.8
Small stand-alone               21.0                    35.8             0.2              41.4     1.5
Mutual                          11.9                    40.5            18.3              24.6     4.7
Lending to large firms
Medium-sized/large              52.6                    38.0             6.1               3.3     0.0
Small group-member              18.6                    21.7            19.6              25.7    14.4
Small stand-alone               33.6                    35.5             0.6              30.3     0.0

Source: Sample survey of 322 banks. 1 - Percentage frequency of responses, weighted by volume of lending to SMEs and to large firms, respectively.

Table 7 How scoring models were developed1 (percentages)

The data used for credit scoring


One of the benefits derived from credit scoring involves the management of the data available to banks. Banks can now fully exploit this information, integrating and combining data for systematic, replicable use. However, accurate data extending over a suitably long period are indispensable to the reliability of the models forecasts. The new techniques also impose a standardization of the documentation required for loan applications, which, among other things, facilitates subsequent securitization. The adaptation of internal information systems originally designed for different purposes is one of the most serious problems and has slowed the introduction of the new techniques.14 Consequently, the models focus on the factors that have traditionally been used to assess creditworthiness (firms financial statements and credit history), whereas other data, both from external sources and available within the banking group, are used less frequently (Table 8). The question on this matter was phrased in ordinal terms, asking respondents to rank the various sources of information by importance. Table 9 reconstructs the ranking based on the frequency of the answers very important and decisive. There are significant differences in the weights of the factors used in credit scoring for SMEs. For mutual banks (and for the small banks), the most important factor is the financial statement, followed by the credit history with the bank and with the rest of the banking system. The larger banks, by contrast, assign greater importance to the firms past credit performance than to its accounting data (Table 9). Less importance is attached to the firms economic sector and geographical area, which in fact are not even considered in many cases (about a third of the models, accounting for 18 percent of loans). Other external data sources, including the interbank register of bad checks and payment cards and the Chamber of Commerces database, are of little importance and are often not used at all. Large banks and mutual banks are the ones that most commonly ignore them (31 and 26 percent, respectively, corresponding to 38 and 19 percent of loans). Qualitative information is generally included in the estimation models, although with relatively modest weight in the banks overall assessment. Finally, large banks also consider any relations between the firm and other members of the banks group; even so, about 40 percent of the large banks do not include this information in their models. In evaluating the creditworthiness of large firms, the importance of the various sources of information used by the larger banks is similar to that used for small firms, but there are some differences in the relative ranking. In the former case, greater importance is attached to the company accounts and especially to qualitative information generally relating to 154 organizational structure, the stature of the management, and the quality

of the investment project to be financed (Table 8). Overall, the models appear to be more flexible, as they allow for more judgmental components. For large firms, more attention is paid to the state of the borrowers existing relations with the other members of the banks group, although a substantial 35 percent of large banks do not consider this factor.
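To illustrate the kind of calculation engine discussed in this section, the sketch below fits a logit default model on simulated hard data (financial-statement ratios and a credit-history flag) and maps the estimated default probabilities into a small number of rating classes. The variables, cut-offs, and data are invented for illustration and do not reproduce any model used by the surveyed banks.

```python
# A minimal sketch of a logit-based scoring engine: estimate a probability of
# default (PD) from hard data, then map borrowers into a few rating classes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000
# Hypothetical hard information: leverage, return on assets, past-arrears flag.
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),      # leverage
    rng.normal(0.05, 0.08, n),     # ROA
    rng.integers(0, 2, n),         # past arrears (0/1)
])
# Simulated defaults: more likely with high leverage, low ROA, past arrears.
true_logit = -3.0 + 2.5 * X[:, 0] - 6.0 * X[:, 1] + 1.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

model = LogisticRegression().fit(X, y)
pd_hat = model.predict_proba(X)[:, 1]        # estimated PD per borrower

# Map PDs into a finite number of rating classes (a synthetic risk indicator).
class_edges = [0.01, 0.03, 0.07, 0.15]       # illustrative PD cut-offs
rating = np.digitize(pd_hat, class_edges)    # 0 = best class, 4 = worst class
for r in range(5):
    mask = rating == r
    if mask.any():
        print(f"class {r}: {mask.sum():4d} borrowers, mean PD {pd_hat[mask].mean():.3f}")
```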

The importance of credit scoring techniques in assessing creditworthiness


Statistical scoring techniques have gained considerable importance in the lending process, in particular in the decision of whether or not to grant a loan. In most cases the score is decisive or very important (Table 10 and Figure 5). There are significant differences between banking groups, and the relative importance of quantitative techniques is definitely greater among the larger banks and decreases with the size of the institutions. Generally, the rating/scoring influences the size of the loan, and for the smaller firms, it also affects the amount of collateral requested. Although scoring techniques are widely used, they are still rarely employed to determine interest rates and loan maturities, suggesting the early stage of the evolution of scoring models depicted by Hand and Thomas (2005) and Hand and Zhou (2009). This lack of an immediate impact of the borrowers credit score on the interest rate that the bank charges would appear to indicate that the effect found by the literature

14 The numerous interbank mergers of recent years have also complicated the integration of data from different sources.


SMEs
Information used2                        Medium-sized/   Small group-   Small stand-   Mutual
                                         large banks     member banks   alone banks    banks
Financial statement data                         46.5            71.0           56.1     82.6
Geographical area and economic sector             2.1             1.3            1.7      2.4
Credit relations with entire system3             85.1            35.6           75.4     50.0
Other outside sources of data4                   11.8             9.8            9.8      9.8
Relations with bank                              50.4            63.9           54.0     46.9
Relations with bank's group                       3.2             6.9            0.0      0.0
Qualitative information5                          4.7            13.4            5.5      3.8

Large firms
Information used2                        Medium-sized/   Small group-   Small stand-
                                         large banks     member banks   alone banks
Financial statement data                         86.7            77.5           29.9
Geographical area and economic sector             0.0             1.2            5.8
Credit relations with entire system3             35.0            47.3           76.1
Other outside sources of data4                    3.1             7.3           16.0
Relations with bank                              27.3            50.0           47.3
Relations with bank's group                       8.7             9.3            0.0
Qualitative information5                         33.7             8.8           22.6

Source: Sample survey of 322 banks. 1 - Sum of frequencies of responses indicating each source as one of the two most important, weighted by volume of lending to SMEs and large firms, respectively. Data for 2006. 2 - Banks were asked: "If you use statistical-quantitative methodologies for assessing firms' creditworthiness, please rank by decreasing order of importance the data considered in your calculation engine in assigning the overall score: 1 for the most important, 2 for the next most important, and so on. If you do not use any particular factor, answer NA." 3 - Source: Central Credit Register or other credit bureau. 4 - Interbank register of bad checks and payment cards, Protest bulletin, etc. 5 - Codifiable data, as through special questionnaires, on organization of the firm, characteristics of the project to fund, and so on.

Table 8 Information used in scoring models1 (percentages)

Rank   Medium-sized and large banks                         Mutual banks
1st    State of credit relations with the banking system    Income statement and balance sheet
2nd    State of relations with the bank                     State of credit relations with the banking system
3rd    Income statement and balance sheet                   State of relations with the bank
4th    Other outside sources                                Other outside sources
5th    Qualitative information                              Qualitative information
6th    Relations with the banking group                     Area and sector of activity

Source: Sample survey of 322 banks

Table 9 Information sources included in scoring systems (ranking of information sources by importance)

in other countries, i.e., an expansion of the volume of lending to marginal customers at higher interest rates, has not (yet) emerged in Italy. However, consistent with developments in risk management and control, these methodologies have been widely used to monitor the situation of firms and the state of loans and accounts. The differences in practices concerning small and large firms do not appear to be pronounced, although there is a somewhat greater tendency to use ratings to determine the pricing of loans to the larger firms (Table 10). Banks' flexibility in using scoring techniques is highly variable. It depends on the characteristics of the procedures chosen, which may

Source: Sample survey of 322 banks

Figure 5 Importance of scoring in lending to SMEs (frequency of decisive and very important answers, weighted by lending to SMEs)
(Bar chart by class of bank; categories: decision, amount, pricing, maturity, collateral, monitoring.)

Source: Sample survey of 322 banks

Figure 6 Flexibility in the use of scoring techniques (percentage share of answers decisive and very important in loan approvals to SMEs, weighted)
(Panels by class of bank; categories: decision, amount, monitoring; answers split between decisive and very important.)

enable the adaptation of scores to account for elements of information not included in the model, but it also depends on the actual importance of the scores in lending decisions and management, i.e., whether they are the main evaluation tool or a supplement to another method of assessment. The model's degree of flexibility was gauged with reference to decisive as one of the possible responses to the question on the importance of scoring methods. Decisive was counted separately from very important specifically to capture the possibility of loan officers derogating from the automatic scores; the answer decisive was interpreted to mean practically no such flexibility. In all cases, the scores were more binding for large banks than for small banks. For decisions on whether to lend to SMEs, credit scoring tools, while important, are decisive for only 18 percent of the entire sample of banks and for a third of the large banks; weighted by volume of loans, the frequencies are higher (Figure 6), confirming that the discretionary power of loan officers tends to diminish as the size of the bank (and of the borrower firm) increases. Further, greater specialization in lending to SMEs corresponds to a lower weight assigned to scoring in loan decisions. As we have seen, the likelihood of a bank developing its own scoring system internally, at least in part, increases with bank size. The purchase of a statistical procedure from the outside could reduce the bank's control over its own instrument for selecting borrowers, fostering the perception of the system as a black box, both initially in relation to the algorithm and database and then subsequently at times of revision [Frame et al. (2001)]. Our survey shows that in each size class of banks, credit scoring techniques for SMEs are more frequently decisive or very important in lending decisions when they are developed by the bank itself.

Bank class and firm size    Lend/not   Amount   Pricing   Maturity   Collateral   Monitoring
Lending to SMEs
Medium-sized/large              91.8     57.0      14.7       15.8         42.9         70.0
Small group-member              67.0     39.3      15.5        6.7         34.1         73.7
Small stand-alone               54.1     31.4      21.4       13.3         34.0         71.0
Mutual                          47.5     31.8      16.5       17.5         31.9         48.5
Lending to large firms
Medium-sized/large              88.0     70.2      20.3       29.0         35.6         82.6
Small group-member              67.6     34.8      17.3        9.0         27.9         61.4
Small stand-alone               50.1     33.2      32.1       14.3         32.1         81.9

Source: Sample survey of 322 banks. 1 - Banks were asked to: "Rank from 1 to 5, in decreasing order of importance. 1=decisive, 2=very important, 3=fairly important, 4=not very important, 5=not important at all. NA=not applicable." The table gives the sum of the frequencies of answers 1 and 2 (decisive or very important), sample limited to banks that use statistical-quantitative methods. Data for the end of 2006. Frequencies weighted by volume of lending to SMEs and large firms respectively.

Table 10 Importance of scoring models in lending decisions (percentages)

Table 11 Importance of factors in assessing the creditworthiness of a new loan applicant1 (percentages)

SMEs
Information used2                       Medium-sized/large  Small group-member  Small stand-alone  Mutual
Statistical-quantitative methods                      70.2                27.6               18.9     8.9
Financial statement data                              95.2                85.7               95.2    96.5
Credit relations with entire system3                  82.6                86.7               97.2    89.5
Availability of guarantees4                           28.3                51.8               45.7    42.0
Qualitative information5                              35.8                33.5               33.4    49.9
First-hand information                                16.3                15.9                9.6    15.0

Large firms
Information used2                       Medium-sized/large  Small group-member  Small stand-alone
Statistical-quantitative methods                      59.6                32.9                0.0
Financial statement data                             100.0                95.2               98.2
Credit relations with entire system3                  72.0                92.5               98.2
Availability of guarantees4                            3.9                24.4               33.5
Qualitative information5                              69.2                48.8               61.8
First-hand information                                 3.9                 4.7                2.9

Source: Sample survey of 322 banks.
1 - Sum of frequencies of responses indicating each source as one of the two most important, weighted by volume of lending to SMEs and large firms, respectively. Data for the end of 2006.
2 - The banks were asked: "For the granting of loans to non-financial firms that apply to you for the first time, please rank in decreasing order of importance the factors used in deciding whether or not to grant the loan: 1 for the most important, 2 for the next most important, and so on. Two different factors cannot be given the same rank. If you do not use any particular factor, answer NA."
3 - Source: Central Credit Register or other credit bureau.
4 - Interbank register of bad checks and payment cards, Protest bulletin, etc.
5 - Codifiable data, as through special questionnaires, on the organization of the firm, the characteristics of the project to be funded, and so on.
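To make explicit how the figures in Tables 10 and 11 are constructed, the short sketch below computes a lending-weighted frequency of "decisive" and "very important" answers from a handful of invented survey responses; the bank rankings and lending volumes are purely illustrative and are not taken from the survey.

```python
import numpy as np

def weighted_frequency(answers, lending_volumes, top_ranks=(1, 2)):
    """Share of answers ranked 1 ('decisive') or 2 ('very important'),
    weighted by each bank's volume of lending, as in Tables 10 and 11."""
    answers = np.asarray(answers)
    weights = np.asarray(lending_volumes, dtype=float)
    hits = np.isin(answers, top_ranks).astype(float)
    return 100.0 * np.average(hits, weights=weights)

# Invented example: five banks' rankings of scoring for the lend/not-lend decision
ranks = [1, 2, 3, 2, 4]                      # 1=decisive ... 5=not important at all
sme_lending = [40e9, 10e9, 5e9, 2e9, 1e9]    # each bank's volume of lending to SMEs
print(f"Weighted frequency of 'decisive'/'very important': {weighted_frequency(ranks, sme_lending):.1f}%")
```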


The information used in deciding on loan applications


Technological change has affected the borrower selection phase, making it easier to build quantitative models that can summarize the data on potential borrowers. Credit scoring techniques can be used to enhance the information from the other sources that banks ordinarily use in screening borrowers; indeed, they can even replace those other assessments and become the main means of evaluation. One of our questions asked what importance the bank assigned to the various sources of information it used when deciding whether or not to grant a loan to a first-time applicant; the output of the statistical model is just one such source and is not always the most important. The results (Table 11) are in line with expectations. For loans to SMEs, scoring methods are assigned high importance more frequently by larger banks and less frequently by smaller ones, whereas qualitative information is emphasized more commonly by mutual banks. In selecting large corporate borrowers, as opposed to SMEs, the statistical models are less important and qualitative information is more so. Finally, small and mutual banks tend to assign considerable weight to the loan applicant's credit relations with the entire system, a tendency that is accentuated when the potential borrower is a larger firm.
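As an illustration of the kind of statistical-quantitative model referred to above, the sketch below combines financial statement ratios, a credit-register flag, and a codified qualitative indicator in a simple logistic scoring function. The variable names, coefficients, and cut-off are invented for illustration only; they are not estimated from the survey data.

```python
import math

# Illustrative (invented) coefficients for a borrower-scoring model combining the
# information sources discussed in the text: financial statement ratios,
# credit-register flags, and a codified qualitative indicator.
COEFFICIENTS = {
    "intercept":        -2.0,
    "leverage":          1.8,   # debt / total assets
    "ebitda_margin":    -3.0,   # operating profitability
    "past_due_flag":     1.2,   # from the Central Credit Register / credit bureau
    "qualitative_score": -0.5,  # codified loan officer judgment (0-3)
}

def default_probability(borrower):
    """Logistic model: higher output means a riskier applicant."""
    z = COEFFICIENTS["intercept"] + sum(
        COEFFICIENTS[name] * value for name, value in borrower.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"leverage": 0.6, "ebitda_margin": 0.08, "past_due_flag": 0, "qualitative_score": 2}
pd = default_probability(applicant)
print(f"Estimated probability of default: {pd:.1%}")
# Arbitrary illustrative cut-off: scores above it are referred back to a loan officer
print("Automatic decision:", "refer to loan officer" if pd > 0.05 else "approve")
```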

Conclusion

Since the 1990s, two significant drivers of change have been affecting the Italian banking system: liberalization and the widespread adoption of information and communication technologies. Among other effects, these two forces generated intense M&A activity and an extensive evolution in the organization of financial intermediaries. This paper focuses on the latter topic. According to the recent banking literature, organizational issues do affect lending activity, especially in the case of SMEs, by influencing the choice of lending technology and by shaping banks' propensity and ability to deal with different types of information. Drawing on an ad-hoc survey, we find that Italian banks are very heterogeneous in their organizational choices, and size explains such differences only in part. The heterogeneity involves the distance between a bank's headquarters and its branch network, the decision-making autonomy of branch managers, the length of their tenure, and the incentives they face. Special consideration has been devoted to the adoption and use of scoring/rating systems, boosted by the evolution of the Capital Adequacy Accords. The impact of these statistical models differs widely across banks in terms of the type of information used to produce automatic scores and the relevance of their role in credit decisions, such as granting credit, setting interest rates, and collateral requirements. All in all, our paper suggests that analyzing lending activity requires a taxonomy that is broader than the traditional one based on size alone.

References

Acemoglu, D., P. Aghion, C. Lelarge, J. Van Reenen, and F. Zilibotti, 2006, Technology, information and the decentralization of the firm, mimeo
Aghion, P. and J. Tirole, 1997, Formal and real authority in organizations, Journal of Political Economy, 105, 1-29
Allen, L., G. DeLong, and A. Saunders, 2004, Issues in the credit risk modelling of retail markets, Journal of Banking and Finance, 28, 727-752
Bank for International Settlements, 2005, Studies on the validation of internal rating systems, Basel Committee on Banking Supervision, Working Paper 14, May
Berger, A. N., 2003, The economic effects of technological progress: evidence from the banking industry, Journal of Money, Credit and Banking, 35, 141-176
Berger, A. N. and R. DeYoung, 2001, The effects of geographic expansion on bank efficiency, Journal of Financial Services Research, 19, 163-84
Berger, A. N., W. S. Frame, and N. H. Miller, 2005, Credit scoring and the availability, price, and risk of small business credit, Journal of Money, Credit and Banking, 191-222
Berger, A. N., N. H. Miller, M. A. Petersen, R. G. Rajan, and J. C. Stein, 2005, Does function follow organizational form? Evidence from the lending practices of large and small banks, Journal of Financial Economics, 76, 237-269
Bofondi, M. and F. Lotti, 2005, Innovation in the retail banking industry: the diffusion of credit scoring, Review of Industrial Organization, 28:4, 343-358
Bonaccorsi di Patti, E., G. Eramo, and G. Gobbi, 2005, Piccole e grandi banche nel mercato del credito in Italia, Banca Impresa Società, XXIV:1, 3-34
Cannari, L., M. Pagnini, and P. Rossi (eds.), 2010, Banks, local credit markets and credit supply, collection of the papers presented at the conference held in Milano on 24 March
Christie, A., M. P. Joye, and R. L. Watts, 2003, Decentralization of the firm: theory and evidence, Journal of Corporate Finance, 9, 3-36
Cowan, C. D. and A. M. Cowan, 2006, A survey based approach of financial institution use of credit scoring for small business lending, Working Paper, Office of Advocacy, United States Small Business Administration, SBAH-04-Q-0021
Degryse, H. and S. Ongena, 2004, The impact of technology and regulation on the geographical scope of banking, Oxford Review of Economic Policy, 20, 571-590
Deng, S. and E. Elyasiani, 2005, Geographic diversification, distance, BHC return and risk, mimeo
Felici, R. and M. Pagnini, 2008, Distance, bank heterogeneity and entry in local banking markets, Journal of Industrial Economics, 56, 500-534
Ferri, G., 1997, Mobilità dei dirigenti ed efficienza allocativa: banche locali e nazionali, Quaderno di Moneta e Credito, 245-265
Frame, W. S., A. Srinivasan, and L. Woosley, 2001, The effect of credit scoring on small business lending, Journal of Money, Credit and Banking, 33, 813-825
Hand, D. J. and L. C. Thomas, 2005, A survey of the issues in consumer credit modelling research, Journal of the Operational Research Society, 56, 1006-1015
Hand, D. J. and F. Zhou, 2009, Evaluating models for classifying customers in retail banking collections, Journal of the Operational Research Society, 61, 1540-1547
Hertzberg, A., J. M. Liberti, and D. Paravisini, 2007, Information and incentives inside the firm: evidence from loan officer rotation, mimeo
Jankowitsch, R., S. Pichler, and W. S. A. Schwaiger, 2007, Modelling the economic value of credit rating systems, Journal of Banking and Finance, 31:1, 181-198
Liberti, J. M., 2005, How does organizational form matter? Distance, communication and soft information, mimeo
Liberti, J. M. and A. Mian, 2006, Estimating the effect of hierarchies on information use, mimeo
Milgrom, P. and J. Roberts, 1992, Economics, organization and management, Prentice Hall, NJ
Scott, J., 2006, Loan officer turnover and credit availability for small firms, Journal of Small Business Management, 44:4, 544-562
Stein, J. C., 2002, Information production and capital allocation: decentralized versus hierarchical firms, Journal of Finance, 57, 1891-1921
Uchida, H., G. F. Udell, and N. Yamori, 2006, Loan officers and relationship lending, mimeo


PART 2

Measuring the Economic Gains of Mergers and Acquisitions: Is it Time for a Change?
Antonios Antoniou, FRT-C Consulting
Philippe Arbour, Director, Lloyds Bank Corporate Markets Acquisition Finance
Huainan Zhao, Associate Professor of Finance, University of Nottingham1

Abstract
In this paper we review the methods of measuring the economic gains of mergers and acquisitions (M&A). We show that the widely employed event study methodology, whether for short or long event windows, has failed to provide meaningful insight and usable lessons regarding the central question of whether mergers and acquisitions create value. We believe the right way to assess the success and therefore the desirability of M&A is through a thorough analysis of company fundamentals. This will require examining smaller samples of transactions with similar characteristics.
1 The views expressed in this article are those of the authors and not representative of the views of the authors' affiliations.


The development of the market for mergers and acquisitions (M&A) has gone hand in hand with the emergence of world capital markets. The corporate landscape is perpetually being altered by M&A transactions. On a macroeconomic level, mergers come in waves, with one of the most memorable waves gaining momentum during the early 1990s and crashing shortly after the turn of the millennium. During this period, deregulation, a booming world economy, strong consumer and investor confidence, combined with rich stock market valuations propelled the economic significance of M&A to new heights. Indeed, during the late 1990s, the size, volume, and frequency of M&A transactions surpassed anything the world had ever seen. On a microeconomic level, mergers represent massive asset reallocations within and across industries, often enabling firms to double in size in a matter of months. Because mergers tend to occur in waves and cluster by industry, it is easily understood that such transactions may radically and swiftly change the competitive architecture of affected industries. It should, therefore, come as no surprise that academics have been so intrigued by the merger debate in recent years. Examining the economic gains (value creation or destruction) of M&A is one of the most coveted research areas in financial economics. The spectacular growth of mergers has justifiably prompted many academics and practitioners to investigate whether such milestone transactions are worth undertaking. More specifically, researchers have sought to find out whether M&A create or destroy value and how the potential gains or losses are distributed among transaction participants. If synergies truly exist between bidders and their targets, M&A should have the potential of representing value-creating corporate events. This question is of utmost importance as its corresponding answer carries important policy implications for regulators and investors. Furthermore, it is vital to assess the aftermath of these colossal transactions, as lessons learned may benefit not only the corporate world, but also the society at large.

Although a plethora of research in financial economics has sought to address the issue of M&A value creation generally, the investigation of how value is created (or destroyed), and the examination of the question from a company fundamentals standpoint, has largely been ignored. The bulk of the existing literature employs the event study methodology introduced and popularized by Fama et al. (1969), which examines what impact, if any, mergers and acquisitions have on stock prices. In accordance with this methodology, a merger is branded successful if the combined entity's equity returns equal or exceed those predicted by some standard benchmark model. For reasons argued below, this simplistic approach too often leads to a Type II error (i.e., the null hypothesis is not rejected when in fact it is false and should be rejected) with respect to the null hypothesis that M&A are value-creating transactions. We invite readers to review the evidence and arguments presented in this article and to judge whether the event study is an appropriate tool, let alone worthy of gold-standard status among some financial economists, for tackling the question of whether M&A result in economic gains on an ex-post basis. We emphasize that this article does not constitute an attack on the event study in general, but rather an objection to the use of event studies as the main academic investigative tool in assessing whether M&A represent value-creating corporate events.

Short-window event study

From a short-run perspective, the most commonly studied event window encompasses the three or five days surrounding a merger announcement. From a theoretical standpoint, and in the context of an efficient market, changes in stock market valuations around merger announcements should fully capture the economic gains (or losses) from merging. Following this argument, it is often argued that the abnormal returns measured during the short-run event window (if any) represent reliable predictors of the success (or failure) of the M&A transactions under evaluation. The event study literature is unanimous in stating that target firm shareholders enjoy a significant positive cumulative abnormal return (CAR) around merger announcements. This finding, however, should be expected and merely represents a statement of the obvious. Intuitively, target firm shareholders expect to receive a premium if they are to hand over their ownership stakes to the acquiring firm and/or if the bidding firm is hoping, via the attractiveness of its bid, to persuade the target firm's board of directors to issue a public statement in recommendation of the offer. It should consequently come as no surprise that positive CARs accrue to target firm shareholders during the period surrounding merger announcements. Provided the acquisition involves a target that is believed to remain a going concern, the CARs earned by the target firm shareholders will invariably be positive as long as a positive premium is offered, irrespective of the identity of the bidder and perhaps even of the synergy potential between the bidder and its target.

The effect of takeover announcements on the acquiring firm's share prices is far from clear. On the one hand, some short-window event studies have found that no, or only small, significant positive abnormal returns accrue to acquiring firm shareholders around merger announcements [Dodd and Ruback (1977); Asquith (1983); Asquith et al. (1983); Dennis and McConnell (1986); Bradley et al. (1988); Franks and Harris (1989)]. On the other hand, others have reported that acquirers experience small but significant negative abnormal returns over the same period [Firth (1980); Dodd (1980); Sudarsanam et al. (1996); Draper and Paudyal (1999)]. In short, the general picture that emerges is that, from a short-window shareholder returns standpoint, M&A are clearly more beneficial to target firm shareholders than to their respective suitors, a fact that is widely acknowledged in the literature. This finding is not particularly encouraging, however, as it is, after all, the acquiring firm which makes the investment and ultimately continues on its journey.
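For readers less familiar with the mechanics, the short sketch below shows how a three-day announcement-window CAR of the kind discussed here is commonly computed against a market-model benchmark. The estimation window, simulated return series, and parameter values are illustrative assumptions, not figures from any study cited in this article.

```python
import numpy as np

def car_three_day(stock_ret, market_ret, event_idx, est_window=200):
    """Cumulative abnormal return over days (-1, 0, +1) around event_idx,
    using a market model (alpha + beta * Rm) estimated on the prior est_window days."""
    est = slice(event_idx - est_window - 1, event_idx - 1)  # estimation period ends before the event window
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)
    window = slice(event_idx - 1, event_idx + 2)            # days -1, 0, +1
    abnormal = stock_ret[window] - (alpha + beta * market_ret[window])
    return abnormal.sum()

# Illustrative use with purely synthetic daily returns
rng = np.random.default_rng(0)
rm = rng.normal(0.0003, 0.01, 500)                          # market returns
ri = 0.0001 + 1.1 * rm + rng.normal(0, 0.01, 500)           # acquirer returns
print(f"Three-day announcement CAR: {car_three_day(ri, rm, event_idx=400):.4%}")
```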


Using a weighted-average approach based on firm size, as measured by market capitalization, allows researchers to assess how combined (target and acquirer) stock returns fare over the same event window. Many believe that this type of study enables us to evaluate the net aggregate economic impact of mergers and acquisitions. Indeed, some financial economists contend that this type of analysis enables us to assess whether M&A result in real economic gains or whether these transactions simply involve a transfer of wealth from one entity to the other (i.e., a zero-sum game). The literature broadly concludes that the combined entity earns a small, albeit positive, CAR around the merger announcement [Bradley et al. (1988); Mulherin and Boone (2000); Andrade et al. (2001)]. But are conclusions from such studies sufficient to draw high-level inferences about the true value creation potential or desirability of M&A? Many believe so. Andrade et al. (2001) refer to short-window event studies as "the most statistically reliable evidence on whether mergers create value for shareholders." They go on to conclude that "[b]ased on the announcement-period stock market response, we conclude that mergers create value on behalf of the shareholders of the combined firms." This, we argue, is surely a premature conclusion, as the hefty premiums paid (which could turn out to be overpayments, especially under agency or hubris motives) blur the picture of whether M&A truly represent beneficial corporate events. More specifically, the premiums offered to target firm shareholders distort or bias weighted-average return calculations.

Let us now explain in more detail why we believe that short-window event studies demonstrate very little with respect to the M&A value creation issue. First, examining stock price movements around the merger announcement tells us little about the sources of economic gains that arise from combining the target and the acquirer (which is, of course, a high-priority question for M&A practitioners). Shelton (1988) writes that "value is created when the assets are used more effectively by the combined entity than by the target and the bidder separately." Short-window event studies do nothing to test this statement. In fact, short-window event studies and their associated conclusions rely on strict assumptions in respect of market efficiency. However, it is possible that investors systematically over- or underreact to merger announcements, which could result in stock prices temporarily deviating from their fundamental levels. If this is accepted to be possible, the event study's ability to distinguish real economic gains from market inefficiency is compromised. As Healy et al. (1992) put it: "[f]rom a stock price perspective, the anticipation of real economic gains is observationally equivalent to market mispricing." Indeed, the mounting body of behavioral finance literature illustrates the need to approach short-run event study results with skepticism. Furthermore, there is plenty of evidence which suggests that stock market wealth may temporarily move independently of fundamentals. The U.S.-centric dot-com stock market bubble of the late 1990s and the more recent global post-Lehman financial meltdown are irrefutable examples of this assertion.

Shojai (2009) points out that a significant level of information asymmetry exists between the bidding firm's M&A teams and the management of the target firms that are in play. In the context of a bid for a listed company (either by way of a take-private or a straight M&A transaction involving two publicly listed firms), bidders must contend with significant headwinds, including limited access to the target firms they are trying to buy. As such, bidders will typically carry out their due diligence exercise from the outside in (i.e., management will rely on the work of advisors, brokers, consultants, and other industry specialists in order to arrive at a reasonable guesstimate of the likely economic state of the target), which necessarily implies that bidders must take a leap of faith in completing the acquisition.2 The ability to refine synergy estimates is a function of time and access, with unrestricted access only becoming possible once the deal is done. Post-acquisition, corporate development teams will finally gain unfettered access to the target's books and to key target firm personnel, allowing acquirers to fine-tune their original synergy estimates and to establish targets and timescales for the realization of the synergies. Bidding firm management will then prioritize synergy work streams based on contribution potential, deliverability risk, level of operational disruption, management time, and cost. In short, bidding firm management will not have a clear picture of performance versus budget until they have traveled down the synergy extraction path. If bidders themselves do not know precisely what they are buying until it is too late, is it reasonable to assume that the market is capable of predicting the outcome of a particular M&A transaction in the space of a few days?

If we accept the premise that markets at times struggle to be efficient, then it follows that the short-window event study's ability to reliably measure real economic gains is compromised. But even in the context of a reasonably efficient market, short-window event study results remain problematic due to the substantial premiums paid to target firm shareholders, which result in a bloated weighted-average CAR (WACAR) calculation and hence a potentially spurious result. Put differently, weighted-average calculations are almost guaranteed to generate a positive result upon the inclusion of target firm abnormal returns. Consequently, we believe that examining weighted-average combined-entity CARs around merger announcements does not advance the M&A value creation debate. This issue can be illustrated with a simple example.

2 Even where data room access is granted to the bidder, the problem of information asymmetry still exists due to selective disclosure of key/sensitive documents by target firm management and the vendors of the business (i.e., in the case of a major shareholder selling its block of shares).


Assume that a cash tender offer is made by Bidder Inc. for the acquisition of Target Inc. One month prior to the takeover announcement, the market value (MV) of Bidder Inc. is $100,000,000, with a stock price of $2 per share, while Target Inc. has a market value of $10,000,000 and a stock price of $1 per share. We employ the CAPM as our simple benchmark model and assume a risk-free rate (rf) of 2 percent and a market risk premium of 6 percent. We also assume that Bidder and Target have betas of one and two respectively. According to our selected benchmark model, the annual expected returns for the Bidder and Target firms are therefore 8 percent and 14 percent respectively,3 or 6.67 basis points (bp) and 11.67 bp respectively over a three-day window.4 Based on this information, the relative size of the target to the bidder is 10 percent, which implies that the weights in calculating the weighted-average cumulative abnormal return (WACAR) for the combined entity would be Wb = 90.9 percent and Wt = 9.1 percent respectively.5

Now suppose that the acquirer announces a cash offer at $1.40 per share for the target, which represents a premium of 40 percent per share purchased. In a relatively efficient market, the target's share price will adjust to the offer price quickly. Hence, the market price of Target Inc.'s shares should shoot up to the $1.40 range in the three days surrounding the announcement. This necessarily implies that the three-day CAR for the target would be just shy of 40 percent.6 Because the target in this example is small relative to the acquirer, it is not unreasonable to assume that the acquirer earns its expected rate of return in the three days surrounding the announcement.7 Thus, on a weighted-average net aggregate basis, the combined entity's CAR (WACAR) would be around 3.63 percent.8 According to short-window event study proponents, there is no question that the acquisition in this example would be branded as value-creating and therefore desirable.

Changing the example slightly, assume that Bidder Inc.'s shareholders do not share the same optimism regarding the union, because they believe that their management is paying too much to acquire Target Inc., or because they fear that the acquisition signals the beginning of a buying spree by the bidder. Shareholders may thus decide to sell Bidder Inc.'s shares, which may result in the acquiring firm earning a negative CAR around the merger announcement (assume CAR = -2.00 percent).9 Using the same weights and premium as described above, the WACAR for the combined entity will still be approximately 1.81 percent.10 That is, the negative bidder CAR of -2.00 percent corresponds to $2,000,000 of bidding firm market value destruction, and yet event study proponents would continue to designate this acquisition as value creating. All else being equal, the acquiring firm shareholders in this example could earn a negative return as large as 3.92 percent (which corresponds to a market value loss of $3,920,000) and the acquisition would still be considered a success, owing to the large target premium which supports the positive result of the weighted-average calculation.11 This illustrates what we call the premium exacerbation problem. The practical interpretation of this example, however, is that the acquiring firm shareholders of the going concern entity (the acquirer) have suffered a -3.92 percent loss in value in the three days surrounding the merger announcement (-470 percent annualized),12 against a benchmark annual expected return of 8 percent for the bidder. Despite this, event study proponents would claim that this acquisition has been value creating; such a conclusion is clearly misguided and is illustrative of the reasons why decades of academic research into M&A based on the event study methodology have failed to make it anywhere near corporate boardrooms [Shojai (2009)].

In reality, Moeller et al. (2004) report that the mean premium paid in over 12,000 U.S. takeover transactions with announcement dates between 1980 and 2001 was 68 percent for large firms and 62 percent for small firms, which implies that, according to our illustration, most (if not all) M&A transactions evaluated using a short-window event study approach will generate a positive weighted-average CAR and will therefore be branded as successful, or value creating, in spite of the potentially significantly negative returns accruing to acquiring firms during the same period. The WACAR is almost invariably positive, and the problem is compounded as the relative size of the target to the acquirer increases. Indeed, premiums offered may easily represent overpayments [Roll (1986)]. During times of high M&A activity, firms believed to be potential takeover targets are likely to carry a substantial takeover premium as part of their market capitalization. Hence, during M&A waves, acquiring firms can end up paying a premium on top of what may already be an overvalued target share price. Ironically, the WACAR approach to evaluating M&A tends to reward overpayments, which invariably leads to the conclusion that M&A are value-creating events.
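As a numerical cross-check, the minimal sketch below reproduces the arithmetic of the Bidder Inc./Target Inc. illustration using only the figures and formulas given in the example and its footnotes; the break-even bidder return replicates the Excel goal seek described in footnote 11.

```python
# Reproduces the illustrative Bidder Inc. / Target Inc. figures assumed in the text.
mv_bidder, mv_target = 100_000_000, 10_000_000
w_target = mv_target / (mv_bidder + mv_target)             # ~9.09%
w_bidder = 1.0 - w_target                                   # ~90.91%

exp_bidder_3d = (3 / 360) * 0.08                            # 6.67 bp (CAPM, beta = 1)
exp_target_3d = (3 / 360) * 0.14                            # 11.67 bp (CAPM, beta = 2)
car_target = 0.40 - exp_target_3d                           # ~39.88%

def wacar(car_bidder):
    """Weighted-average combined-entity CAR, as in footnotes 8 and 10."""
    return w_target * car_target + w_bidder * car_bidder

print(f"WACAR, bidder earns expected return: {wacar(0.0):.2%}")    # ~3.63%
print(f"WACAR, bidder CAR of -2%:            {wacar(-0.02):.2%}")  # ~1.81%

# Bidder raw return that drives the WACAR to zero (the goal seek of footnote 11)
breakeven_return = exp_bidder_3d - w_target * car_target / w_bidder
print(f"Break-even bidder return:            {breakeven_return:.2%}")           # ~-3.92%
print(f"Implied bidder value loss:           ${-breakeven_return * mv_bidder:,.0f}")  # ~$3.92m
```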

Mitchell et al. (2004) examine price pressure effects around merger announcements and find that, on average, acquirers earn a significant negative abnormal return of -1.2 percent in the three days surrounding the announcement. However, they find that a substantial proportion of this negative return is due to merger arbitrage short-selling, rather than information, thereby contradicting the premise that stock returns solely reflect changes in expectations regarding the present value of future cash flows. After controlling for price pressure effects, acquirers' announcement period abnormal returns shrink to -0.47 percent and become statistically insignificant. Their findings demonstrate that conventional short-run event study results may reflect more than just investors' expectations regarding the desirability of the mergers taking place.
3 Bidder expected return based on CAPM: E(Rb) = rf + βb(Rm − rf) = 2% + 1(6%) = 8%. The target's return is calculated on the same basis, with beta = 2.
4 The three-day expected return for the bidder is (3/360) × 8% = 6.67bp; for the target, (3/360) × 14% = 11.67bp. For simplicity, we do not adjust the beta estimates for the short-window returns calculation.
5 Weights are determined as follows: Wt = MVt/(MVt + MVb) = 10,000,000/110,000,000 = 9.09%, and Wb = 1 − Wt = 90.91%.
6 CAR = actual return − expected return = 40.00% − 0.12% = 39.88%. 11.67bp is rounded to 0.12 percent for simplicity.
7 Prior research shows that, on average, acquirers break even in the few days surrounding merger announcements. See, for instance, Asquith (1983), Dennis and McConnell (1986), Bradley et al. (1988), and Andrade et al. (2001).
8 WACAR = Wt(CARt) + Wb(CARb) = (9.09%)(39.88%) + (90.91%)(0%) = 3.63%.
9 In a three-day window, Andrade et al. (2001) find that the abnormal return is approximately 0 percent for bidders, regardless of the benchmark model used. An abnormal return of -2 percent is therefore an exaggerated estimate used to make our point.
10 WACAR = Wt(CARt) + Wb(CARb) = (9.09%)(39.88%) + (90.91%)(−2%) = 1.81%. Bidder market capitalization of $100,000,000 × −2% = −$2,000,000 of lost bidding firm market value.
11 We use the goal seek function in Excel to solve for the bidder return (−3.92 percent) which, holding the remainder of the example constant, yields a WACAR of 0 percent. On a base market capitalization of $100,000,000, this level of return translates into a bidding firm market capitalization loss of $3,920,000.
12 −3.92% × (360/3) = −470.40%.


Moeller et al. (2005) examine acquiring firm returns in recent merger waves. In addition to testing abnormal percentage returns, they also measure aggregate dollar returns.13 Strikingly, they find that between 1998 and 2001, acquiring firms' three-day announcement period (-1, +1) average CAR is 0.69 percent, while the aggregate dollar return14 measure indicates that acquiring firm shareholders lose a total of U.S.$240 billion over the window spanning from -2 to +1 days around the merger announcement (a result which is explained by the significant dollar losses generated by some of the larger wealth-destroying acquisitions in the sample). Upon further investigation, they also find that the losses to acquirers exceeded the gains to targets, resulting in a net aggregate dollar loss of U.S.$134 billion during the same window. These findings provide interesting but rather painful evidence that if we merely rely on the short-run event study result (i.e., the three-day CAR of 0.69 percent), we run the risk of concluding that the sample merger transactions are value-creating, despite the massive aggregate shareholder losses experienced during the same event window.15 The same authors also show that between 1980 and 2001, the average three-day announcement period CAR for acquirers is positive in every year except 2 out of 21, while the three-day aggregate dollar returns are negative in 11 out of 21 years. Once again, the three-day CARs tell us that mergers create value for acquirers in almost every year between 1980 and 2001, regardless of the massive dollar losses realized in over half of the period.
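The wedge between average percentage CARs and aggregate dollar returns is easy to see with stylized numbers: a handful of very large, value-destroying deals can swamp many small positive ones. The sketch below uses invented figures purely to illustrate that mechanic; it does not reproduce Moeller et al.'s data or their exact measure.

```python
import numpy as np

# Hypothetical sample: many small acquirers with mildly positive announcement CARs,
# plus a few very large acquirers with sharply negative CARs.
mkt_cap = np.array([1e9] * 95 + [200e9] * 5)         # acquirer market capitalizations (US$)
car     = np.array([0.02] * 95 + [-0.08] * 5)        # three-day announcement CARs

equal_weighted_car = car.mean()                       # what a standard event study averages
dollar_return      = (car * mkt_cap).sum()            # aggregate change in acquirer value

print(f"Equal-weighted average CAR: {equal_weighted_car:+.2%}")        # positive
print(f"Aggregate dollar return:    {dollar_return / 1e9:+,.1f} bn")   # large negative
```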

If short-window event studies fail to shed clarity on the shareholder wealth effects associated with M&A, then their ability to answer the bigger-picture question of whether M&A represent desirable corporate events hardly fills us with confidence. According to short-window event studies, mergers and acquisitions are value-creating transactions. So why is there so much controversy surrounding the desirability of M&A? That is, why do event study results stand in such sharp contrast with the growing rhetoric that creating value through M&A is easier said than done?16 And why do event study results stand in such stark contrast with investor experience? It is widely acknowledged that many recent mergers have proven to be total disasters. Even consultancy firms, which derive significant income from advising companies on M&A-related matters, have documented the widespread nature of these failures [Lajoux and Weston (1998)]. Academic studies have also exposed the high prevalence of divestitures after acquisitions [Kaplan and Weisbach (1992)]. If mergers were truly value-creating transactions, due to real economic gains and not market mispricings, it is highly unlikely that acquirers would divest recent purchases at such a high frequency, nor would there be so much controversy surrounding the desirability of M&A generally.

Hitherto, we have shown that it may be naïve to conclude that M&A are value-creating transactions based solely on the prevalence of positive CARs, or even weighted-average abnormal returns, around merger announcements. In many cases, target firm shareholders may be the only ones who gain anything from the transactions, possibly to the detriment of acquiring firms. But in retrospect, examining how target shareholders fared around the bid announcement has very little relevance to the questions being asked by M&A practitioners. Undeniably, at the merger announcement and in the few days surrounding it, we know very little about any future negative drift in the acquirer's share price, or about whether acquiring firm managers will succeed in unlocking synergies. We ought to be much more concerned about the firm that makes the investment and ultimately carries on: the acquiring firm.

Long-window event study


A second strand of the literature examines the long-run post-merger stock performance of the acquirer and its absorbed target, or of the combined entity. In general, this strand of the literature converges on the notion that acquiring firms underperform their benchmark returns in the post-merger period; this is often referred to in the literature as the long-run merger underperformance anomaly. Some researchers have asked whether overpayments could be responsible for the long-run negative drift in share prices after acquisitions.17 In one study [Antoniou et al. (2008a)], however, the authors were unable to establish such a relationship statistically.

13 Malatesta (1983) argues that the widely used percentage abnormal returns do not capture the real wealth changes of acquiring firm shareholders, whereas dollar returns do capture them.
14 Moeller et al. (2005) define aggregate dollar returns as the sum of the acquisition dollar returns (change in market capitalization from day -2 to day +1) divided by the sum of the market capitalization of acquiring firms two days before the acquisition announcements, in 2001 dollars.
15 The sharp contrast between the CAR and the shareholder wealth effect signals the possibility of a Type II error (if the CAR is used) in respect of the null hypothesis that M&A are value-creating corporate events.
16 "Evidence suggests that the majority of acquisitions do not benefit shareholders in the long term. Valuations and premiums tend to be excessively high and targets impossible to achieve." Financial Times, 2004.
17 Schwert (2003) states: "One interpretation of this evidence (post-merger underperformance) is that bidders overpay and that it takes the market some time to gradually learn about this mistake."


Although we concur that it makes more sense to focus on bidder firm results and bidder shareholder returns in order to ascertain the desirability of M&A on an ex-post basis, we argue that long-run event studies also fail to provide meaningful insights, due to the following shortcomings.

First and foremost is the methodological problem associated with long-run event studies. For instance, bad model problems imply that it is simply not possible to accurately forecast or measure expected returns, thus rendering futile the analysis of long-run abnormal returns, particularly as the event window lengthens. In addition to the bad model problem, a number of researchers have pointed out that the process used in calculating and testing long-run abnormal returns is in itself biased. For example, Barber and Lyon (1997) and Kothari and Warner (1997) address misspecification problems in long-horizon event studies. They argue that commonly employed methods for testing for long-run abnormal returns yield misspecified test statistics, and they invite researchers to use extreme caution in interpreting long-horizon test results. In order to mitigate bad model problems and biased test statistics, Barber and Lyon (1997) and Lyon et al. (1999) advocate the use of a single control firm or a carefully constructed reference portfolio. The idea is to select a matching firm, or a portfolio of matching firms, of approximately the same size (MV)18 and book-to-market ratio (BE/ME)19 as the sample firms. This approach has been shown to eliminate some well-known biases and to produce better test statistics.20
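The control-firm approach can be sketched as follows: pick the non-event firm closest in size and book-to-market, then compare buy-and-hold returns over the horizon of interest. The matching metric, firm names, and simulated returns below are illustrative assumptions and do not reproduce the exact procedures of Barber and Lyon (1997) or Lyon et al. (1999).

```python
import numpy as np

def pick_control(event_mv, event_bm, candidates):
    """Choose the non-event firm closest in size (MV) and book-to-market (BE/ME);
    the distance metric here is illustrative only."""
    def distance(c):
        return abs(np.log(c["mv"] / event_mv)) + abs(c["bm"] - event_bm)
    return min(candidates, key=distance)

def bhar(event_monthly_ret, control_monthly_ret):
    """Buy-and-hold abnormal return: compounded event-firm return minus
    compounded control-firm return over the same horizon."""
    buy_hold = lambda r: np.prod(1.0 + np.asarray(r)) - 1.0
    return buy_hold(event_monthly_ret) - buy_hold(control_monthly_ret)

# Illustrative use with made-up candidate firms and 36 months of simulated returns
rng = np.random.default_rng(1)
candidates = [{"name": f"peer_{i}", "mv": mv, "bm": bm}
              for i, (mv, bm) in enumerate([(8e8, 0.6), (1.2e9, 0.5), (5e9, 0.9)])]
control = pick_control(event_mv=1.0e9, event_bm=0.55, candidates=candidates)
print("Matched control firm:", control["name"])
print(f"36-month BHAR: {bhar(rng.normal(0.01, 0.06, 36), rng.normal(0.012, 0.06, 36)):.2%}")
```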

Although some recent M&A studies have applied this updated approach (listed below), it remains problematic for the following reason. Until the adoption of Statement of Financial Accounting Standards (SFAS) 141 in 2001 in the U.S. (i.e., shortly after the crash of the important M&A wave of the late 1990s), mergers were accounted for using either the purchase or the pooling of interests ("pooling") method.21 At least conceptually, the use of the pooling method was meant to be reserved for transactions involving a merger of equals. Due to a favorable impact on reported earnings (rather than economic earnings), however, acquirers began manipulating M&A transactions in order to qualify for the use of the pooling method, partly based on the belief that the higher level of reported earnings would result in higher stock market valuations,22 or, in certain cases, to obfuscate the quarter-by-quarter comparability of results.23 As such, the choice of permissible merger accounting methods would eventually become the subject of widespread controversy and regulatory scrutiny, culminating in the elimination of the pooling method under U.S. Generally Accepted Accounting Principles (GAAP) and eventually under International Financial Reporting Standards (IFRS).

The very fact that acquirers previously enjoyed discretion over their choice of merger accounting method compromises the ability of financial economists to select relevant and unbiased control firms based on BE/ME, as advocated in Lyon et al. (1999). The two accounting techniques are fundamentally different. Under the purchase method, the offer price for the target is compared to the fair market value (FMV) of the net assets of the target, with the difference being capitalized on the balance sheet in the form of goodwill,24 which was previously required to be amortized over a period not exceeding 40 years (20 years under IAS GAAP), resulting in a lower level of reported earnings under this method.25 Under the pooling method, however, operating results and prior-period accounts (i.e., assets and liabilities) were simply added together at their stated book values (irrespective of whether different accounting methods had historically been used by the bidder and its target26), with the concepts of FMV and goodwill playing no part in the process. Under the purchase method, the acquirer's post-merger book value of equity27 remains equal to its original book equity, unless new equity is issued to finance the acquisition.28 Under the pooling method, by contrast, the book value of equity equals the sum of the historical book values of the acquirer and its target. In summary, if an acquisition is financed with equity, the purchase method should usually result in a higher book equity value, because the offer price for the target will typically exceed the FMV of the target's net assets. But even under the same method (i.e., purchase), the resulting book equity account for the combined firm will vary depending on whether the acquisition is financed with cash or with stock.29

18 Market value of equity.
19 Book value of equity to market value of equity.
20 Barber and Lyon (1997) identify three biases: the new listing bias, the rebalancing bias, and the skewness bias.
21 Note that the pooling method was also previously allowed under International Accounting Standard (IAS) 22, although IAS 22 was superseded by IFRS 3 in 2004, which no longer allows the pooling method.
22 The choice of accounting method should not impact cash flow. See, for example, Hong et al. (1978) and Davis (1990).
23 The pooling method requires the restatement of historical financial accounts (in some cases well before the closing of the acquisition), as if the bidder and its target had been one firm all along. This makes deciphering the performance of each individual business very difficult. In the late 1990s, Tyco International Ltd., for instance, came under fire after pursuing a series of acquisitions which were accounted for under the pooling method, making it nearly impossible to compare one quarter to the next. Businessweek, 2001.
24 Under the purchase method, a positive differential between the offer price and the FMV of the target's net assets would first be allocated to identifiable intangibles (i.e., licenses, in-process research and development, or patents), with the balance allocated to goodwill.
25 As a compromise, when the pooling method was eventually abolished under U.S. GAAP, the Financial Accounting Standards Board (FASB) issued statement no. 142, which specified that goodwill would no longer be amortized but rather would become the subject of an annual impairment test.
26 For instance, U.S. GAAP allows both last-in-first-out (LIFO) and first-in-first-out (FIFO) inventory accounting methods.
27 Equity book value is defined as the sum of shareholders' equity and retained earnings.
28 Under the U.S. GAAP variant, in-process research and development is required to be written off as part of the transaction, thereby reducing the retained earnings of the acquirer and impacting the book value of equity.
29 Book equity values should generally be larger for stock-financed transactions (i.e., where an equity issuance has taken place) relative to acquisitions that are paid for in cash (i.e., financed either by a debt issuance or by cash on the balance sheet).


Matters are further complicated by the timing issue for the application of each method. The pooling method consolidates target and acquiring firm accounts from the beginning of the year (historical bidder and target results are also added together), regardless of when the merger is actually completed. Under the purchase method, however, target and acquiring firm results are only added together from the date of the transaction's completion onwards. The picture that emerges is that one cannot bundle into the same sample deals that were accounted for under different merger accounting methods while continuing to expect a justified benchmark return and therefore an unbiased test result. Unless the aforementioned factors are controlled for in the research design, the selection of control firms, and therefore the calculation of expected returns, becomes tricky and probably flawed. This, in turn, compromises the calculation of long-run abnormal returns, as well as the cross-sectional comparability of results. Although merger accounting intricacies go beyond the scope of this article, it is easily understood that such issues must be carefully analyzed when applying popular long-run event study methodologies. The literature has largely failed to control for these key issues.

The above-mentioned merger accounting discussion also significantly weakens the use of the so-called state-of-the-art bootstrap approach30 advocated by Ikenberry et al. (1995), Kothari and Warner (1997), and Lyon et al. (1999), and applied by Rau and Vermaelen (1998) to mergers and acquisitions. Indeed, the 1,000 pseudo-portfolios matched on size and book-to-market ratio at the time of merger completion do not control for the aforementioned accounting issues, thereby calling into question the validity of the obtained matches and thus the empirical distribution of abnormal returns generated under the approach.

But even if we control for merger accounting differences and all other possible sources of misspecification,31 we are still far from obtaining an accurate and reliable long-run test result. In one attempt, Lyon et al. (1999) recommend two general approaches that control for common misspecification problems in long-run event studies. Despite the authors' positive intentions, however, their simulated results confirm that well-specified test statistics (i.e., where empirical rejection levels are consistent with theoretical rejection levels) are only guaranteed in random samples, while misspecified test statistics are pervasive in non-random samples. We also know that mergers and acquisitions cluster in time and by industry, which necessarily implies that well-specified test statistics in long-run M&A event studies should hardly exist [Antoniou et al. (2008b)]. The central message of their study, however, is that "the analysis of long-run abnormal returns is treacherous."

Second, in a more general framework, Viswanathan and Wei (2004) provide a mathematical proof that the usual abnormal return (CAR/BHAR32) calculated in event studies has a negative expectation. They prove that, in any finite sample, the expected event abnormal return will invariably be negative and will become more negative as the event window is lengthened. The implication of utmost importance here is that these negative results do not discriminate between successful and unsuccessful transactions, suggesting that the so-called long-run M&A underperformance anomaly may not be anomalous at all. In addition, Viswanathan and Wei go on to examine the above problem in infinite samples. They prove that, asymptotically, the event abnormal return converges to zero, and hence they conclude that the negative long-run event abnormal return is simply a small sample problem. This again offers a reasonable explanation as to why some of the larger M&A studies have reported insignificant results. If the small sample problem is the long-run event study's only flaw, then one can probably get around this issue by increasing the sample size. But what would such studies contribute to our understanding of M&A? By averaging abnormal returns across a very large number of cross-sectional observations, we end up with what might resemble a near-normal distribution of abnormal returns with a mean of zero. Consequently, nothing can be concluded from this result, apart from the fact that M&A have a 50/50 probability of creating or destroying value and can therefore be likened to a crapshoot. Surely, this type of conclusion is of no use to regulators, practitioners, or investors.

Finally, the recent development of a series of new methodologies has given rise to a new wave of long-run event studies.33 Mitchell and Stafford (2000), for instance, reexamine the long-run anomaly associated with corporate takeovers.34 Their results suggest that acquirers (combined firms) earn the expected rate of return over the long run, thereby implying that mergers neither create nor destroy value, an idea which is consistent with that found in the previous paragraph.
30 As noted in Ikenberry et al. (1995), the bootstrap approach avoids many problematic assumptions associated with conventional t-tests over long time horizons, namely normality, stationarity, and the time independence of sample observations.
31 Lyon et al. (1999) document that the misspecification of test statistics can generally be traced to five sources: the new listing bias, the rebalancing bias, the skewness bias, cross-sectional dependence, and bad model problems.
32 Buy-and-hold abnormal returns (BHARs).
33 For these new methodologies, refer to Barber and Lyon (1997), Lyon et al. (1999), Brav (2000), and Mitchell and Stafford (2000).
34 For these long-run event studies, see, for example, Mitchell and Stafford (2000), Brav et al. (2000), Eckbo et al. (2000), and Boehme and Sorescu (2002).

But even if we were able to overcome all the methodological problems associated with long-window event studies, what level of insight could be gained from such a perfect study? We know that stock prices are forward looking and that, in a relatively efficient market, the price of an asset should reflect expectations regarding the underlying asset's future cash flows, based on the information available today. Consequently, the returns observed under long-window event studies (particularly in the latter years of the sample) ought to be discounting anticipated events that reach far beyond the merger or acquisition under analysis.


For example, stock returns generated five years after an M&A transaction (t+5 years) should discount what is expected to happen in periods t+6, t+7, and so on. But this extends so far beyond the actual transaction, which occurred in year t, that we fail to see the relevance of the analysis. We refer to this as the long-window forward expectations trap. Adding to the event study's general malaise, confounding events (whether exogenous or endogenous) may further distort the inferences drawn from long-run stock returns. All in all, long-term event studies fail to provide a means of identifying and isolating the effects of the actual merger that has previously taken place, and thus do not provide much relevant insight into the micro- or macroeconomic impact of M&A.

In light of the arguments presented thus far, we believe that event study methodology, for both short and long event windows, falls short of offering an economically sound tool for measuring merger performance on an ex-post basis. Ironically, it is the assumption regarding market efficiency that is the downfall of both long-term and short-term event studies, but in different ways: in the case of the former, a reasonable degree of market efficiency implies that long-run stock returns are probably irrelevant to the analysis of an individual event at a fixed point in time, while in the case of the latter, markets are insufficiently efficient to reliably predict the outcome of a particular M&A transaction. In our view, financial economists have overindulged in event studies, which have largely yielded results that are biased, unreliable, and lacking in insight. Although we can appreciate the merits of tackling the M&A value-creation debate from an investor experience standpoint,35 we believe the debate is best served when approached from a different perspective: that of company fundamentals.

Fundamental analysis

Unlocking synergies is the most commonly cited managerial motivation for undertaking M&A [Walter and Barney (1990)]. If synergies truly exist, the economic gains from mergers should show up in the combined firm's fundamentals. Coming back to Shelton's definition of value creation, it is clear that the concept has less to do with share price movements (at least in terms of aetiology) and more to do with asset reorganizations and improvements in the financial performance metrics and key performance indicators relevant to the firm and sector under analysis. Jarrell et al. (1988) postulate that gains to shareholders must be real economic gains arising from the efficient rearrangement of resources. Consistent with basic finance principles, improvements in company fundamentals should drive capital gains. As such, we believe that the analysis of the desirability of M&A begins not with stock returns (the symptom), but rather with the underlying fundamentals which drive cash flows (the cause), which in turn power shareholder returns.

In addition to short- and long-window event studies, there is a small body of literature that examines the pre- and post-merger operational performance of acquiring firms. In short, a value-creating transaction is one that is associated with some measurable improvement (a relative improvement at a minimum) in company fundamentals. Furthermore, if purported synergies are real, and cash flows do improve, we should be able to identify the sources of any such real economic gains. Managers undertaking M&A for synergistic reasons, rather than for hubris-related motives, must have identified possible sources of economic gains before proceeding with the transaction. These types of studies are more conducive to performance evaluation on an ex-post basis, in our view, and they are also more likely to contain information that can be used and applied by business school students and M&A practitioners generally.

In a 1992 journal article, Healy et al. find that the 50 largest mergers between U.S. public industrial firms completed between 1979 and 1984 experienced higher operating cash flow returns, mainly due to post-merger improvements in asset productivity. They also report that such improvements do not come at the expense of cutbacks in capital expenditure or research and development (R&D) spending, thereby undermining the claim that the improvement in post-merger cash flows is achieved at the expense of the acquiring firm's long-run viability and competitiveness. These results are similar to those of Kaplan (1989), although the latter author finds that merged firms reduce capital expenditure in the three years following the merger. In short, both studies indicate that merging was probably a desirable course of action, as evidenced by the fact that acquiring firms appeared to enjoy better economic strength relative to peers during the post-acquisition period. This point illustrates the importance of using carefully selected, industry-adjusted benchmarks in interpreting M&A study results. If the aforementioned results prove pervasive on a time series and cross-sectional basis, then we can feel more comfortable with the blanket statement that M&A are generally desirable corporate events. However, a large literature gap remains to be filled in this particular area of research.
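A Healy-style comparison can be sketched as follows: compute the firm's operating-cash-flow return, subtract the contemporaneous industry median, and compare the industry-adjusted figure before and after the deal. The numbers below are invented for illustration and do not reproduce Healy et al.'s data or their exact variable definitions.

```python
import numpy as np

def industry_adjusted_ocf_return(firm_ocf_returns, industry_median_returns):
    """Median industry-adjusted operating cash flow return over a set of years:
    firm return minus its industry median, in the spirit of Healy et al. (1992)."""
    diffs = np.asarray(firm_ocf_returns) - np.asarray(industry_median_returns)
    return float(np.median(diffs))

# Invented annual operating-cash-flow-to-assets returns for a merged firm and its industry
pre_firm,  pre_industry  = [0.14, 0.15, 0.13], [0.13, 0.14, 0.13]   # three years before the deal
post_firm, post_industry = [0.17, 0.18, 0.16], [0.14, 0.13, 0.14]   # three years after the deal

pre_adj  = industry_adjusted_ocf_return(pre_firm, pre_industry)
post_adj = industry_adjusted_ocf_return(post_firm, post_industry)
print(f"Industry-adjusted OCF return, pre-merger:  {pre_adj:+.1%}")
print(f"Industry-adjusted OCF return, post-merger: {post_adj:+.1%}")
print(f"Change (a crude measure of the operating gain): {post_adj - pre_adj:+.1%}")
```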

Fundamental analysis
Unlocking synergies is the most commonly cited managerial motivation for undertaking M&A [Walter and Barney (1990)]. If synergies truly exist, economic gains from mergers should thus show up in the combined firms fundamentals. Coming back to Sheltons definition of value creation, it is clear that the concept has less to do with share price movements (at least in terms of aetiology), and more to do with asset reorganizations, and improvements in a number of financial performance metrics and key performance indicators relevant to the firm/sector under analysis. Jarrell et al. (1988) postulate that gains to shareholders must be real economic gains via the efficient rearrangement of resources. Consistent with basic finance principles, improvements in company fundamentals should drive capital gains. As such, we believe that the analysis of the desirability of M&A begins not with stock returns (the symptom), but rather with the underlying factors/fundamentals which drive cash flows (the cause), which in turn power shareholder returns. In addition to short- and long-window event studies, there is a small body 166 of literature that examines pre- and post-merger operational performance

35 Unlike CARs, Buy and hold abnormal returns (BHARs) provide a reasonable measure of investor experience [Barber and Lyon (1997)]. 36 Goodwill, for example, should be excluded for better comparability of results across time periods and across sample firms.

The Capco Institute Journal of Financial Transformation


Measuring the Economic Gains of Mergers and Acquisitions: Is it Time for a Change?

date onwards, and prior accounts are not restated. Conversely, particularly in the case of serial acquirers adopting the pooling method, it is nearly impossible to isolate the like-for-like performance of component firms due to the restatement of historical accounts by the acquirer after each new acquisition. Without making the relevant adjustments to the data, a research design that involves performing ratio analysis to compare pre- and post-merger results must therefore be applied with care. Another problem arises in the computation of meaningful industry averages due to the fact that firms operate across multiple industries and geographies, which complicates the identification of relevant pure-play benchmarks for calculating industry-adjusted results and ratios. The selected benchmark should ideally be limited to relevant comparable firms that have chosen not to merge. However, we know and recognize that mergers come in waves and cluster by industry, which poses a research design challenge given the currently statistically-focused foundation of academic research into M&A.37
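A small sketch may make the industry-adjustment idea above concrete. It is written in the spirit of the operating performance comparisons discussed in this section, but every number, and the use of a simple median-peer benchmark, is our own illustrative assumption rather than the method of any particular study.

```python
# Illustrative industry-adjusted operating performance comparison.
# All figures are hypothetical; the point is that industry-wide movements are
# netted out before any change is attributed to the merger itself.

# Operating cash flow return (operating cash flow / market value of assets),
# by year relative to the merger.
merged_firm = {-2: 0.11, -1: 0.12, 1: 0.13, 2: 0.14}
industry_median = {-2: 0.10, -1: 0.10, 1: 0.12, 2: 0.12}   # non-merging peers

adjusted = {year: merged_firm[year] - industry_median[year] for year in merged_firm}

pre_merger = sum(adjusted[y] for y in (-2, -1)) / 2
post_merger = sum(adjusted[y] for y in (1, 2)) / 2

print("Industry-adjusted return, pre-merger average: ", round(pre_merger, 3))
print("Industry-adjusted return, post-merger average:", round(post_merger, 3))
print("Change tentatively attributable to the merger:", round(post_merger - pre_merger, 3))
```

Under these invented numbers, the raw post-merger improvement disappears once the industry benchmark is removed, which is exactly the trap that industry adjustment is designed to avoid.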

Conclusion: is it time for a change?

Since its birth in 1969, the event study has held a virtual monopoly over academia's various attempts to shed light on the M&A value creation debate. We argue that the event study has stagnated in terms of its incremental insights into M&A. Among the event study's shortcomings are the hefty premiums offered to target shareholders, which inflate the short-run weighted-average CARs of combined entities and have misled many researchers into concluding that most M&A transactions represent value-creating events. Furthermore, we show that cumulative abnormal returns observed around merger announcements can produce poor estimates of shareholder wealth effects. We also discuss various problems inherent to long-window event studies and conclude that such studies are likewise unsuitable for measuring the economic gains of M&A, owing to methodological problems and the forward-looking nature of stock returns. In short, both short- and long-window event studies provide biased results and undependable insights regarding the question of M&A value creation.

In the wake of multiple merger waves that appear to grow in strength and size every time the tide comes in, financial economists and finance students can no longer afford to use research methods that do little more than reinforce the obsolete work of their academic peers, whilst continuing to sidestep the key questions being asked by investors, regulators, and M&A practitioners. Advancing the M&A value creation debate has never been more critical. Since the first version of this article in 2004, we have argued that new methods are needed to advance our knowledge on this vital issue, and we continue to believe that fundamentals-based approaches represent more promising avenues for new and future research. Despite the flexibility offered by GAAP, accounting data (when analyzed with the appropriate level of skill and competence) remains the best proxy of company economic performance available to investors, analysts, and academics alike. Encouragingly, we expect the divide between IFRS and U.S. GAAP to continue to narrow over time, in line with the globalization of investing practice and of the investor base. As such, we believe that the M&A value creation puzzle can be better understood by returning to fundamentals. If the very businesses that are studied in academic research can only ensure their own survival by continuously reinventing themselves to meet the evolving needs of consumers, while facing up to the ever-lurking threats of competition, complacency, and obsolescence, then why should research into M&A not be subject to the same Darwinian forces?

If our call for new research methods is heard and answered, we would not be surprised if this meant the end of large-sample, statistically-oriented studies in favor of smaller, almost case-study-like research that focuses on select peer groups, with analytical emphasis placed on softer elements of post-merger integration, ranging from managerial compensation and the creation of M&A oversight committees for combined firms through to the robustness of 180-day post-acquisition plans and heat maps of where synergies may lie between a bidder and its target. Although such studies are likely to suffer from their own methodological weaknesses, they can only represent an improvement on the current crop of event studies, which have dismally failed to produce meaningful and usable insights on the issue of M&A value creation.

References:

Andrade, G., M. Mitchell, and E. Stafford, 2001, New evidence and perspectives on mergers, Journal of Economic Perspectives, 15, 103-120
Antoniou, A., P. Arbour, and H. Zhao, 2008a, How much is too much: are merger premiums too high? European Financial Management, 14, 268-287
Antoniou, A., P. Arbour, and H. Zhao, 2008b, The effects of cross-correlation of stock returns on post-merger stock performance, ICFAI Journal of Mergers and Acquisitions, 5, 7-29
Asquith, P., 1983, Merger bids, uncertainty, and stockholder returns, Journal of Financial Economics, 11, 51-83
Asquith, P., R. F. Bruner, and D. W. Mullins, Jr., 1983, The gains to bidding firms from merger, Journal of Financial Economics, 11, 121-139
Barber, B. M. and J. D. Lyon, 1997, Detecting long-run abnormal stock returns: the empirical power and specification of test statistics, Journal of Financial Economics, 43, 341-372
Boehme, R. D. and S. M. Sorescu, 2002, The long-run performance following dividend initiations and resumptions: underreaction or product of chance? Journal of Finance, 57, 871-900
Bradley, M., A. Desai, and E. H. Kim, 1988, Synergistic gains from corporate acquisitions and their division between the stockholders of target and acquiring firms, Journal of Financial Economics, 21, 3-40
Brav, A., 2000, Inference in long-horizon event studies: a Bayesian approach with application to initial public offerings, Journal of Finance, 55, 1979-2016
Brav, A., C. Geczy, and P. Gompers, 2000, Is the abnormal return following equity issuances anomalous? Journal of Financial Economics, 56, 209-249
Davis, M. L., 1990, Differential market reaction to pooling and purchase methods, The Accounting Review, 65, 696-709
Dennis, D. K. and J. J. McConnell, 1986, Corporate mergers and security returns, Journal of Financial Economics, 16, 143-187

37 The small-sample bias is often regarded as a shortcoming for studies in financial economics.


Dodd, P., 1980, Merger proposals, management discretion and stockholder wealth, Journal of Financial Economics, 8, 105-137
Dodd, P. and R. Ruback, 1977, Tender offers and stockholders' returns: an empirical analysis, Journal of Financial Economics, 5, 351-373
Draper, P. and K. Paudyal, 1999, Corporate takeovers: mode of payment, returns and trading activity, Journal of Business Finance and Accounting, 26, 521-558
Eckbo, B. E., R. Masulis, and O. Norli, 2000, Seasoned public offerings: resolution of the new issue puzzle, Journal of Financial Economics, 56, 251-291
Fama, E. F., L. Fisher, M. C. Jensen, and R. Roll, 1969, The adjustment of stock prices to new information, International Economic Review, 10, 1-21
Firth, M., 1980, Takeovers, shareholder returns, and the theory of the firm, Quarterly Journal of Economics, 94:2, 235-260
Franks, J. R. and R. S. Harris, 1989, Shareholder wealth effects of corporate takeovers: the U.K. experience 1955-1985, Journal of Financial Economics, 23, 225-249
Healy, P. M., K. G. Palepu, and R. S. Ruback, 1992, Does corporate performance improve after mergers? Journal of Financial Economics, 31, 135-175
Hong, H., R. S. Kaplan, and G. Mandelker, 1978, Pooling vs. purchase: the effects of accounting for mergers on stock prices, The Accounting Review, 53, 31-47
Ikenberry, D., J. Lakonishok, and T. Vermaelen, 1995, Market underreaction to open market share repurchases, Journal of Financial Economics, 39, 181-208
Jarrell, G. A., J. Brickley, and J. Netter, 1988, The market for corporate control: the empirical evidence since 1980, Journal of Economic Perspectives, 2, 49-68
Kaplan, S., 1989, The effects of management buyouts on operating performance and value, Journal of Financial Economics, 24, 217-254
Kaplan, S. and M. S. Weisbach, 1992, The success of acquisitions: evidence from divestitures, Journal of Finance, 47, 107-138
Kothari, S. P. and J. B. Warner, 1997, Measuring long-horizon security price performance, Journal of Financial Economics, 43, 301-339
Lajoux, A. R. and J. F. Weston, 1998, Do deals deliver on postmerger performance? Mergers and Acquisitions, 33:2, 34-37
Loughran, T. and A. M. Vijh, 1997, Do long-term shareholders benefit from corporate acquisitions? Journal of Finance, 52, 1765-1790
Lyon, J. D., B. M. Barber, and C. Tsai, 1999, Improved methods for tests of long-run abnormal stock returns, Journal of Finance, 54, 165-201
Malatesta, P. H., 1983, The wealth effect of merger activity and the objective functions of merging firms, Journal of Financial Economics, 11, 155-181
Mitchell, M. and E. Stafford, 2000, Managerial decisions and long-term stock price performance, Journal of Business, 73, 287-329
Mitchell, M., T. Pulvino, and E. Stafford, 2004, Price pressure around mergers, Journal of Finance, 58, 31-63
Moeller, S. B., F. P. Schlingemann, and R. M. Stulz, 2004, Firm size and the gains from acquisitions, Journal of Financial Economics, 73, 201-228
Moeller, S. B., F. P. Schlingemann, and R. M. Stulz, 2005, Wealth destruction on a massive scale? A study of acquiring-firm returns in the recent merger wave, Journal of Finance, 60, 757-782
Mulherin, J. H. and A. L. Boone, 2000, Comparing acquisitions and divestitures, Journal of Corporate Finance, 6, 117-139
Rau, P. R. and T. Vermaelen, 1998, Glamour, value and the post-acquisition performance of acquiring firms, Journal of Financial Economics, 49, 223-253
Roll, R., 1986, The hubris hypothesis of corporate takeovers, Journal of Business, 59, 197-216
Schwert, G. W., 2003, Anomalies and market efficiency, in Constantinides, G., M. Harris, and R. Stulz (eds.), Handbook of the economics of finance, North-Holland
Shelton, L. M., 1988, Strategic business fit and corporate acquisitions, Strategic Management Journal, 9, 279-287
Shojai, S., 2009, Economists' hubris - the case of mergers and acquisitions, Journal of Financial Transformation, 26, 4-12
Sudarsanam, S., P. Holl, and A. Salami, 1996, Shareholder wealth gains in mergers: effect of synergy and ownership structure, Journal of Business Finance and Accounting, 23, 673-698
Viswanathan, S. and B. Wei, 2004, Endogenous events and long run returns, working paper, Fuqua School of Business, Duke University
Walter, G. A. and J. B. Barney, 1990, Management objectives in mergers and acquisitions, Strategic Management Journal, 11, 79-86


PART 2

Mobile Payments Go Viral: M-PESA in Kenya


Ignacio Mas, Senior Advisor, Financial Services for the Poor (FSP) team, Bill and Melinda Gates Foundation

Dan Radcliffe, Program Officer, Financial Services for the Poor (FSP) team, Bill and Melinda Gates Foundation1

Abstract
M-PESA is a small-value electronic payment and store of value system that is accessible from ordinary mobile phones. It has seen exceptional growth since its introduction by mobile phone operator Safaricom in Kenya in March 2007: it has already been adopted by 14 million customers (corresponding to 68 percent of Kenya's adult population) and processes more transactions domestically than Western Union does globally. M-PESA's market success can be interpreted as the interplay of three sets of factors: (i) preexisting country conditions that made Kenya a conducive environment for a successful mobile money deployment; (ii) a clever service design that facilitated rapid adoption and early capturing of network effects; and (iii) a business execution strategy that helped M-PESA rapidly reach a critical mass of customers, thereby avoiding the adverse chicken-and-egg (two-sided market) problems that afflict new payment systems.

1 From Yes Africa can: success stories from a dynamic continent, World Bank, August 2010.


M-PESA in a nutshell2
M-PESA was developed by mobile phone operator Vodafone and launched commercially by its Kenyan affiliate Safaricom in March 2007. M-PESA ("M" for mobile and "PESA" for money in Swahili) is an electronic payment and store of value system that is accessible through mobile phones. To access the service, customers must first register at an authorized M-PESA retail outlet. They are then assigned an individual electronic money account that is linked to their phone number and accessible through a SIM card-resident application on the mobile phone.3

M-PESA is useful as a retail payment platform because it has extensive reach into large segments of the population. Figure 1 shows the size of various retail channels in Kenya.6 Note that there are more than five times as many M-PESA outlets as PostBank branches, post offices, bank branches, and automated teller machines (ATMs) combined. Using existing retail stores as M-PESA cash-in/cash-out outlets reduces deployment costs and provides greater convenience and lower cost of access to users.

Customers can deposit and withdraw cash to/from their accounts by exchanging cash for electronic value at a network of retail stores (often referred to as agents). These stores are paid a fee by Safaricom each time they exchange these two forms of liquidity on behalf of customers. Once customers have money in their accounts, they can use their phones to transfer funds to other M-PESA users and even to non-registered users, pay bills, and purchase mobile airtime credit. All transactions are authorized and recorded in real time using secure SMS, and are capped at around U.S.$800. Customer registration and deposits are free. Customers then pay a flat fee of around U.S.¢35 for person-to-person (P2P) transfers and bill payments, U.S.¢30 for withdrawals (for transactions of less than U.S.$30), and U.S.¢1.1 for balance inquiries.4 Individual customer accounts are maintained on a server that is owned and managed by Vodafone, but Safaricom deposits the full value of its customers' balances on the system in pooled accounts in two regulated banks. Thus, Safaricom issues and manages the M-PESA accounts, but the value in the accounts is fully backed by highly liquid deposits at commercial banks. Customers are not paid interest on the balance in their M-PESA accounts. Instead, the foregone interest is paid into a not-for-profit trust fund controlled by Safaricom (the purpose of these funds has not yet been decided).

A snapshot of M-PESA after four years

M-PESA is going from strength to strength. Safaricom reached the 10 million customer mark in just over three years and is now serving 14 million customers, the majority of whom are active. This corresponds to 81 percent of Safaricom's customer base and 68 percent of Kenya's adult population.7 Other key developments and figures reported by Safaricom as of April 30, 2011 are:8

28,000 retail stores at which M-PESA users can cash in and cash out, of which nearly half are located outside urban centers.

U.S.$415 million per month in person-to-person (P2P) transfers. On an annualized basis, this is equal to roughly 17 percent of Kenyan gross domestic product (GDP).9 Although transactions per customer have been on a rising trend, they remain quite low, probably still under two P2P transactions per month. (A rough arithmetic check of these figures appears after this list.)

The average transaction size on P2P transfers is around U.S.$33, but Vodafone has stated that half the transactions are for a value of less than U.S.$10.

U.S.$94 million in annual revenue in FY2010. This is equal to 9 percent of Safaricom's revenues.
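The headline ratios quoted in this snapshot can be reproduced with back-of-the-envelope arithmetic. The GDP and population figures computed below are not independent data; they are simply the values implied by the totals and percentages reported above.

```python
# Rough sanity check of the snapshot figures (all inputs are approximate).

p2p_per_month_usd = 415e6                    # U.S.$415 million per month in P2P transfers
annual_p2p = p2p_per_month_usd * 12

share_of_gdp = 0.17                          # "roughly 17 percent of Kenyan GDP"
implied_gdp = annual_p2p / share_of_gdp

customers = 14e6                             # 14 million registered customers
implied_adult_population = customers / 0.68  # "68 percent of Kenya's adult population"
implied_safaricom_base = customers / 0.81    # "81 percent of Safaricom's customer base"

print(f"Annualized P2P volume:    U.S.${annual_p2p / 1e9:.1f} billion")
print(f"Implied Kenyan GDP:       U.S.${implied_gdp / 1e9:.1f} billion")
print(f"Implied adult population: {implied_adult_population / 1e6:.1f} million")
print(f"Implied Safaricom base:   {implied_safaricom_base / 1e6:.1f} million")
```

The check is only internal: it shows that the quoted totals and percentages are mutually consistent, not that any of them is independently verified here.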

[Figure 1 here: number of outlets offering financial services in Kenya, shown on a logarithmic scale for PostBank branches, total post offices, bank branches, ATMs, M-PESA stores, and airtime resellers.]

Figure 1 Outlets offering financial services in Kenya5

2 For more detailed accounts of the M-PESA service, see Hughes and Lonie (2009) for a historical account, Mas and Morawczynski (2009) for a fuller description of the service, and Mas and Ngweno (2009) for the latest accomplishments of M-PESA. 3 The Subscriber Identification Module (SIM) card is a smart card found inside mobile phones that are based on the GSM family of protocols. The SIM card contains encryption keys, secures the user's PIN on entry, and drives the phone's menu. 4 The Short Messaging Service (SMS) is a data messaging channel available on GSM phones. We assume an exchange rate of U.S.$1 = 85 Kenyan shillings. 5 Data for this figure was pulled from the Central Bank of Kenya, Kenya Post Office Savings Bank, and Safaricom websites. 6 Kenya has a total population of nearly 40 million, with 78 percent living in rural areas and a GDP per capita of U.S.$1,600. 19 percent of adults have access to a formal bank account. See FSDT (2009a) for financial access data derived from the FinAccess survey, a nationally representative survey of 6,600 households conducted in early 2009. 7 Population figures are from the United Nations (2010), http://data.un.org/CountryProfile.aspx?crName=Kenya. 8 M-PESA performance statistics are as of December 31, 2010 (http://www.safaricom.co.ke/index.php?id=1073). Additional figures are taken from Safaricom's FY2010 results for the period ending May 31, 2010 and Central Bank of Kenya reports. 9 GDP figure is from the World Development Indicators database, World Bank (July 2010).


There are at least 27 companies using M-PESA for bulk distribution of payments. Safaricom itself used it to distribute dividends on Safaricom stock to 180,000 individual shareholders who opted to receive their dividends into their M-PESA accounts, out of a total of 700,000 shareholders.


Since the launch of the bill pay function in March 2009, there are at least 75 companies using M-PESA to collect payments from their customers. The biggest user is the electric utility company, which now has roughly 20 percent of its one million customers paying through M-PESA.


At least two banks (Family Bank and Kenya Commercial Bank) are using M-PESA as a mechanism for customers to either repay loans or withdraw funds from their bank accounts.

In May 2010, Equity Bank and M-PESA announced a joint venture, M-KESHO, which permits M-PESA users to move money between their M-PESA mobile wallet and an interest-bearing Equity Bank account. While several hundred thousand customers have opened M-KESHO accounts, only a fraction of these are actively being used, and it is unclear how aggressively Equity Bank and Safaricom are promoting this jointly-branded product.

M-PESA's service evolution


M-PESA's original core offering was the P2P payment, enabling customers to send money to anyone with access to a mobile phone. It opened up a market for transactions which previously were handled largely informally through personal trips, friends, and public transport networks. That is represented by the set of transactions labeled "personal networks" in the middle of Figure 2. Many P2P transactions can be characterized as scheduled payments (such as sending a portion of salary earned at the end of the month to relatives back home), but many represent a basic form of finance, where people can draw on a much broader network of family members, friends, and business associates to access money as and when required. Thus, M-PESA not only introduces a large measure of convenience to transactions that were already occurring, but it also enables a basic form of financial protection for a large number of users by creating a network for instant, on-demand payments. In recent months, Safaricom has increasingly opened up M-PESA to institutional payments, enabling companies to pay salaries and collect bill payments. In the future, Safaricom envisions increased use of M-PESA for in-store purchases. Thus, Safaricom intends for M-PESA to become a more pervasive retail payments platform, a strategy represented by the downward arrow in Figure 2. The challenge remains for M-PESA to become a vehicle for the delivery of a broader range of financial services to the bulk of the Kenyan population, represented by the upward arrow in Figure 2. While some users are using M-PESA to save, there is likely to be a need to develop more targeted savings products that balance customers' preference for liquidity and commitment, and which connect into a broader range of financial institutions. This is the journey M-PESA must be on for it to deliver on its promise of addressing the challenge of financial inclusion in Kenya. Safaricom will need to develop appropriate service, commercial, and technical models for M-PESA to interwork with the systems of other financial service providers. We return to this topic in the concluding section of this paper.

Customer perspectives on M-PESA


William Jack of Georgetown University and Tavneet Suri of MIT recently released results from a panel survey that queried 2,016 Kenyan households in August 2008 and resurveyed them in December 2009 [Jack and Suri (2010)]. The results show that M-PESA is steadily propagating down market, reaching a majority of Kenya's poor, unbanked, and rural populations:

Adoption of M-PESA has continued to march ahead, going from 44 percent of Kenyan households in 2008 to 70 percent in 2009. M-PESA has propagated down market: the share of poor households that are registered M-PESA users has gone from 28 percent in 2008 to 51 percent in 2009. (Here, the poor are defined as the poorest 50 percent of Kenyan households who earn, on average, about U.S.$3.40 per capita per day.) Similarly, the percent of rural households using M-PESA has gone from 29 percent to 59 percent, and the percent of unbanked households using M-PESA has gone from 25 percent to 50 percent.

Customers' perceptions of M-PESA are steadily improving: the percentage of users who trust their agent was 95 percent in Round 2, compared to 65 percent in Round 1, even while the number of agents quadrupled during the period from 4,000 to 16,000. Customers reporting delays in withdrawing money fell from 22 percent to 16 percent, and the share of delays attributed to agents running out of liquidity fell from 70 percent to 30 percent. When asked about the hypothetical impact of M-PESA closing down, 92 percent of customers said that it would have a large and negative effect on their lives, up from 85 percent.

M-PESA users are increasingly using it to save: the percentage of users who say they use M-PESA to save has gone from 76 percent to 81 percent, and the percentage who say they save for emergencies has gone from 12 percent to 22 percent. Most of the uptick in saving behavior is due to early adopters saving more over time. This indicates that as users get familiar with the product, they are more likely to use it as a savings tool.

M-PESA helps users deal with negative shocks: the researchers find that households who have access to M-PESA and are close to an agent are better able to maintain their level of consumption expenditures, and in particular food consumption, in the face of negative income shocks, such as job loss, livestock death, a bad harvest, business failure, or poor health. On the other hand, households without access to M-PESA are less able to absorb such adverse shocks. The researchers have been careful to rule out explanations based on mere correlation and are currently investigating the precise mechanisms that underlie this ability to spread risk.

[Figure 2 here. Diagram labels: M-PESA's role in promoting fuller financial inclusion (upward arrow); formal financial products (savings, credit, insurance); informal service providers (pawnbroker, money lender); on-demand and scheduled payments over personal networks (pushing and pulling money across time; just payments); remote B2C/C2B institutional payments (salaries, bill pay, G2P, online/e-commerce); in-store merchant payments for goods and services; M-PESA as a fuller retail payments platform (downward arrow).]

Figure 2 Potential range of transactions supported by M-PESA



The broader significance of M-PESA


Before examining why M-PESA achieved such dramatic growth, we briefly discuss three top-line lessons that have emerged from M-PESA's success. Firstly, M-PESA has demonstrated the promise of leveraging mobile technology to extend financial services to large segments of unbanked poor people. This is fundamentally because the mobile phone is quickly becoming a ubiquitously deployed technology, even among poor segments of the population. Mobile penetration in Africa has increased from 3 percent in 2002 to 51 percent today, and is expected to reach 72 percent by 2014.10 And, happily, the mobile device mimics some of the key ingredients needed to offer banking services. The SIM card inside GSM phones can be used to authenticate users, thereby avoiding the costly exercise of distributing separate bank cards to low-profitability poor customers. The mobile phone can also be used as a point of sale (POS) terminal to initiate financial transactions and securely communicate with the appropriate server to request transaction authorization, thus obviating the need to deploy costly dedicated devices in retail environments.

Secondly, M-PESA has demonstrated the importance of designing usage-based rather than float-based revenue models for reaching poor customers with financial services. Because banks make most of their money by collecting and reinvesting deposits, they tend to distinguish between profitable and unprofitable customers based on the likely size of their account balances and their ability to absorb credit. Banks thus find it difficult to serve poor customers because the revenue from reinvesting small-value deposits is unlikely to offset the cost of serving these customers. In contrast, mobile operators in developing countries have developed a usage-based revenue model, selling prepaid airtime to poor customers in small increments, such that each transaction is profitable on a stand-alone basis. This is the magic behind the rapid penetration of prepaid airtime into low-income markets: a card bought is profit booked, regardless of who bought the prepaid card. This usage-based revenue model is directly aligned with the model needed to sustainably offer small-value cash-in/cash-out transactions at retail outlets and would make possible a true mass-market approach, with no incentive for providers to deny service based on minimum balances or intensity of use.

Thirdly, M-PESA has demonstrated the importance of building a low-cost transactional platform which enables customers to meet a broad range of their payment needs. Once a customer is connected to an e-payment system, s/he can use this capability to store money in a savings account, send and receive money from friends and family, pay bills and monthly insurance premiums, receive pension or social welfare payments, or receive loan disbursements and repay them electronically. In short, when a customer is connected to an e-payment system, his or her range of financial possibilities expands dramatically. Putting these elements together, M-PESA has prompted a rethink on the optimal sequencing of financial inclusion strategies. Where most financial inclusion models have employed credit-led or savings-led approaches, the M-PESA experience suggests that there may be a third approach: focus first on building the payment rails on which a broader set of financial services can ride.
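The contrast between float-based and usage-based revenue models can be made concrete with stylized numbers. Every parameter below is our own illustrative assumption; the sketch is only meant to show why per-transaction fees, unlike interest earned on reinvested balances, can cover the cost of serving a low-balance customer.

```python
# Stylized comparison of float-based and usage-based revenue for one low-income
# customer. All parameter values are hypothetical.

avg_balance_usd = 5.0          # typical stored balance
net_interest_margin = 0.05     # annual margin a bank might earn reinvesting deposits
annual_cost_to_serve = 4.0     # assumed annual cost of servicing the account

float_revenue = avg_balance_usd * net_interest_margin     # interest on the float

transactions_per_year = 24     # e.g., two fee-paying transactions per month
fee_per_transaction = 0.35     # flat fee per paid transaction
usage_revenue = transactions_per_year * fee_per_transaction

for label, revenue in (("Float-based", float_revenue), ("Usage-based", usage_revenue)):
    verdict = "covers" if revenue >= annual_cost_to_serve else "does not cover"
    print(f"{label} revenue: U.S.${revenue:.2f} per year, which {verdict} "
          f"an assumed U.S.${annual_cost_to_serve:.2f} annual cost to serve")
```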

Accounting for M-PESA's success: three perspectives


The rest of this paper explores M-PESA's success from three angles. First, we examine the environmental factors in Kenya that set the scene for a successful mobile money deployment. Then, we examine the service design features that facilitated the rapid adoption and frequent use of M-PESA. And, finally, we examine the elements in Safaricom's execution strategy that helped M-PESA rapidly reach a critical mass of customers. In so doing, we draw extensively on a sequence of four papers which readers can refer to for more detailed accounts of the M-PESA story: Heyer and Mas (2009) on the country factors that led to M-PESA's success, Mas and Morawczynski (2009) on M-PESA's service features, Mas and Ngweno (2010) on Safaricom's execution, and Mas (2009) on the economics underpinning branchless banking systems. Beyond the compelling marketing, cold business logic, and consistent execution of M-PESA, its success is a vivid example of how great things happen when a group of leaders from different organizations rally around common challenges and ideas. The story of M-PESA straddles the social and the commercial, the public and the private, powerful organizations and determined individuals:
10 Wireless Intelligence (www.wirelessintelligence.com).


The individuals and institutions behind M-PESA


The idea of M-PESA was originally conceived by a London-based team within Vodafone, led by Nick Hughes and Susie Lonie. This team believed that the mobile phone could play a central role in lowering the cost of poor people accessing financial services. The idea was seized upon by the Safaricom team in Kenya, led by CEO Michael Joseph and Product Manager Pauline Vaughn. They toyed with the idea, convinced themselves of its power, developed it thoroughly prior to the national launch, and oversaw a very focused execution. The Central Bank of Kenya (CBK), and in particular its Payments System group led by Gerald Nyoma, deserves much credit for being open to the idea of letting a mobile operator take the lead in providing payment services to the bulk of the population. The CBK had recently been made aware of the very low levels of bank penetration in the country by the first FinAccess survey in 2006, and they were determined to explore all reasonable options for correcting the access imbalance. The CBK worked in close partnership with Vodafone and Safaricom to assess the opportunities and risks involved, prior to the launch and as the system developed. They were conscious that premature regulation might stifle innovation, so they chose to monitor closely and learn, and to formalize the regulations later. Finally, the U.K.'s Department for International Development (DfID) played an instrumental role, first by funding the organizations that made the FinAccess survey possible (the Financial Sector Deepening Trust in Kenya and the FinMark Trust in South Africa), and then by providing seed funding to Vodafone to trial its earliest experiments with M-PESA. DfID's role in spotlighting the need for mobile payments and funding the early risk illustrates the constructive roles that donor funding can play.

Strong latent demand for domestic remittances


Safaricom based the initial launch of the M-PESA service on the "send money home" proposition, even though the service also allows the user to buy and send airtime, store value, and, more recently, pay bills. Demand for domestic remittance services will be larger where migration results in the splitting of families, with the breadwinner heading to urban centers and the rest of the family staying back home. This is the case in Kenya, where 17 percent of households depend on remittances as their primary income source [FSD-Kenya (2007a)]. In her study of M-PESA, Ratan (2008) suggests that the latent demand for domestic remittances is related to urbanization ratios. More propitious markets will be those where the process of rural-urban migration is sufficiently rooted to produce large migration flows, but not so advanced that rural communities are hollowed out. Countries with mid-range urbanization ratios (20 percent to 40 percent), especially those that are urbanizing at a rapid rate, are likely to exhibit strong rural-urban ties requiring the transfer of value between them. This is the case in many African countries like Kenya and Tanzania, where the urbanization ratios are 22 percent and 25 percent, respectively.11 In the Philippines and Latin America, where urbanization ratios exceed 50 percent, remittances are more likely to be triggered by international rather than domestic migration patterns. Where entire nuclear families move, remittances will be stronger where there is cultural pressure to retain a connection with one's ancestral village. In Kenya, migrants' ties with rural homes are reinforced by an ethnic (rather than national) conception of citizenship. These links are expressed through burial, inheritance, cross-generational dependencies, social insurance, and other ties, even in cases where migrants reside more or less permanently in cities.12 In other settings, a greater emphasis on national as opposed to local or ethnic identity may have diminished the significance of the rural home and hence dampened domestic remittance flows.


Poor quality of existing alternatives


Latent demand for e-payments must be looked at in the context of the accessibility and quality of the alternatives. If there are many good alternatives to mobile payments (as is typically the case in developed countries), it will be difficult to convince users to switch to the new service. In the Philippines, for example, the G-Cash and Smart Money mobile payment services experienced low take-up in part due to the availability of a competitive alternative to mobile payments: an extensive and efficient semi-formal retail network of pawnshops, which offered domestic remittance services at a charge of 3 percent.

Kenya country factors: unmet needs, favorable market conditions


The growth of M-PESA is a testament to Safaricoms vision and execution capacity. However, Safaricom also benefited from launching the service in a country which contained several enabling conditions for a successful mobile money deployment, including: strong latent demand for domestic remittances, poor quality of available financial services, a banking regulator which permitted Safaricom to experiment with different business models and distribution channels, and a mobile communications market characterized by Safaricoms dominant market position and low commissions on airtime sales.

11 U.N. Population Division: World Urbanization Prospects (2007). 12 For fuller analyses of the use of mobile money for domestic remittances in Kenya, see Ratan (2008) and Morawczynski (2008).


[Figure 3 here: percentage of households using each money transfer channel (M-PESA, hand delivery, bus, post office, direct deposit, money transfer service, other), in 2006 and in 2009.]

Figure 3 Money transfer behavior before and after M-PESA

In Kenya, the most common channel for sending money before M-PESA was informal bus and matatu (shared taxi) companies. These companies are not licensed to transfer money, resulting in considerable risk that the money will not reach its final destination. And Kenya Post, Kenya's major formal remittance provider, is perceived by customers as costly, slow, and prone to liquidity shortages at rural outlets. Meanwhile, Kenya's sparse bank branch infrastructure (840 branches) is far too limited to compete with M-PESA's 28,000 cash-in/cash-out outlets. Figure 3 illustrates how Kenyan households sent money before and after M-PESA [FSD-Kenya (2007a and 2009a)]. Note the dramatic reduction in the use of informal bus systems and Kenya Post to transfer money between 2006 and 2009. As noted above, M-PESA's early adopters were primarily banked customers, which suggests that M-PESA did not acquire its initial critical mass through competition with the formal sector but rather as a complement to formal services for clients who were wealthier, more exposed to formal financial service options, and less risk-averse. As the service moves deeper into the market, unbanked users will likely drive M-PESA's expansion, due to the competitive advantages of formal mobile offers over other options. This is one reason why Africa, with its large unbanked population, is seen as such a promising market for mobile money deployments.


A dominant mobile operator and low airtime commissions


The chances of a mobile money scheme taking root also depend on the strength of the mobile operator within its market. Market share is an important asset because it is associated with a larger customer base for cross-selling the mobile money service, a larger network of airtime resellers which can be converted into cash-in/cash-out agents, stronger brand recognition and trust among potential customers, and larger budgets to finance the heavy up-front market investment needed to scale a deployment. With a market share of around 80 percent, Safaricom enjoyed each of these benefits when it launched M-PESA.

A mobile money deployment will also have a greater chance of success in countries where the commissions mobile operators pay airtime resellers are relatively low. This is because, if commissions are too high, resellers will not be attracted by the lower commissions of the incipient cash-in/cash-out business. In Safaricom's case, airtime commissions total 6 percent, of which 5 percent is passed on to the retail store. A 1 to 2 percent commission on a cash-in/out transaction is plausibly attractive: the store need only believe that the cash business may be five times as big as the airtime business in volume terms. This seems reasonable, considering that the bulk of airtime sales are of low denominations (around U.S.¢25).
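The store-level comparison implied by this paragraph is easy to spell out. The commission rates below come from the text; the monthly volumes are invented purely for illustration.

```python
# Hypothetical monthly volumes for a single retail store (illustrative only).
airtime_sales = 400.0        # U.S.$ of airtime sold per month
cash_in_out_volume = 2000.0  # U.S.$ of deposits and withdrawals handled per month
                             # (five times the airtime volume, as in the text)

airtime_margin = 0.05                             # share of airtime sales kept by the store
cash_margin_low, cash_margin_high = 0.01, 0.02    # commission range on cash transactions

airtime_income = airtime_sales * airtime_margin
cash_income_low = cash_in_out_volume * cash_margin_low
cash_income_high = cash_in_out_volume * cash_margin_high

print(f"Airtime commission income:     U.S.${airtime_income:.2f}")
print(f"Cash-in/out commission income: U.S.${cash_income_low:.2f} to U.S.${cash_income_high:.2f}")
# With cash volume five times airtime volume, even the 1 percent commission
# matches the income from reselling airtime.
```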

A supportive banking regulator


Regulation of mobile money can help to secure trust in new mobile money schemes. At the same time, regulation may constrain the success of a mobile money deployment by limiting the scheme operator's degrees of freedom in structuring the business model, service proposition, and distribution channels. In the case of M-PESA, Safaricom had a good working relationship with the Central Bank of Kenya (CBK) and was given regulatory space to design M-PESA in a manner that fit its market. The CBK and Safaricom worked out a model that provided sufficient prudential comfort to the CBK. The CBK insisted that all customer funds be deposited in a regulated financial institution, and reviewed the security features of the technology platform. In turn, the CBK allowed Safaricom to operate M-PESA as a payments system, outside the provisions of the banking law.13 Safaricom has had to pay a certain price for this arrangement. For instance, interest earned on deposited balances must go to a not-for-profit trust and cannot be appropriated by Safaricom or passed on to customers. There are also limits on transaction sizes (subsequently relaxed) to address anti-money laundering concerns. But, fundamentally, Safaricom was able to design the service as it saw fit, without having to contort its business model to fit within a prescribed regulatory model. The CBK has continued to support M-PESA's development, even in the face of pressure from banks. In late 2008, after a lobbying attack from the banking industry seeking to shut down the service, the Central Bank did an audit of the M-PESA service at the request of the Ministry of Finance and declared it safe and in line with the country's objectives for financial inclusion.14 So far, the Central Bank appears justified in its confidence in M-PESA, as there have been no major reports of fraud, and system downtime, although frequent, has not been catastrophic.

13 The Central Bank of Kenya Act was amended in 2003 to give the CBK a broad oversight mandate over payment systems, but the operational modalities for its regulatory powers over payment systems have not been implemented, pending approval of a new National Payments System Bill which has languished in Parliament. 14 The results of the survey are explained in Okoth (2009).




A reasonable base of banking infrastructure


Finally, the ability of M-PESA stores to convert cash to e-value for customers depends on how easily they can rebalance their liquidity portfolios. This will be more difficult to achieve if bank branch penetration is too low, as this will force the agent channel to develop alternative cash transport mechanisms. Thus, an agent network will need to rely on a minimal banking retail infrastructure. (This qualifies our earlier point that lack of access to formal services indicates a strong market opportunity. There appears to be a branch penetration sweet spot for mobile money, where penetration is not so high that it hampers demand for mobile money services, but not so low that agents are unable to manage their liquidity.) Kenya is reasonably well supplied with rural liquidity points due to the branch networks of Equity Bank and other banks and MFIs. Even so, shortage of cash or electronic value for M-PESA agents is a problem in both the countryside and the city. Other countries face more serious liquidity constraints, especially in rural areas, which is likely to be a major factor affecting the success of mobile money services in specific country contexts.

A simple user interface


The simplicity of M-PESA's message has been matched by the simplicity of its user interface. The M-PESA user interface is driven by an application that runs from the user's mobile phone. The service can be launched right from the phone's main menu, making it easy for users to find. The menu loads quickly because it resides on the phone and does not need to be downloaded from the network each time it is called. The menu prompts the user to provide the necessary information, one prompt at a time. For instance, for a P2P transfer, the user will be asked to enter the destination phone number, the amount of the transfer, and the personal identification number (PIN) of the sender. Once all the information is gathered, it is fed back to the customer for final confirmation. Once the customer hits OK, the request is sent to the M-PESA server in a single text message. Consolidating all the information into a single message reduces messaging costs, as well as the risk of the transaction request being interrupted half-way through. A final advantage is that the application can use the security keys in the user's SIM card to encrypt messages end-to-end, from the user's handset to Safaricom's M-PESA server.
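As a rough sketch of the design idea just described (not Safaricom's actual protocol), the handset-side application can be pictured as collecting the prompted fields locally and only then emitting one self-contained request. The field names, separator, and message format below are invented for illustration; in the real service the payload is also encrypted with keys held on the SIM.

```python
# Minimal sketch of "prompt locally, send once" packaging for a P2P transfer.
# Field names and the message format are invented; they only illustrate why a
# single consolidated message is cheaper and more robust than several SMSs.

def pack_p2p_request(destination: str, amount_ksh: int, pin: str) -> str:
    """Consolidate the prompted fields into a single message payload."""
    return f"P2P|{destination}|{amount_ksh}|{pin}"

def confirmation_text(destination: str, amount_ksh: int) -> str:
    """Text echoed back to the user for final confirmation before sending."""
    return f"Send KSh {amount_ksh} to {destination}? Press OK to confirm."

# Example: values a user might have entered at the three successive prompts.
dest, amount, pin = "0722000000", 1000, "1234"
print(confirmation_text(dest, amount))
print("Single message sent to the server:", pack_p2p_request(dest, amount, pin))
```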

M-PESA's service design: getting people onto the system


While M-PESA's explosive growth was fueled by certain country-specific enabling conditions, the success of such an innovative service hinged on the design of the service. Conducting financial transactions through a mobile phone is not an intuitive idea for many people, and walking to a corner shop to conduct deposits and withdrawals may not at first seem natural to many. To overcome this adoption barrier, Safaricom had to design M-PESA in a way that (i) helped people grasp immediately how they might benefit from the service; (ii) removed all barriers that might prevent people from experimenting with the service; and (iii) fostered trust in the retail outlets who would be tasked with promoting the service, registering customers, and facilitating cash-in/cash-out services.

Removing adoption barriers: free to register, free to deposit, no minimum balances


Safaricom designed the scheme to make it as easy as possible for customers to try the new service. There is a quick and simple process for customer registration, which can be done at any M-PESA retail outlet. Customers pay nothing to register and the clerk at the outlet does most of the work during the process. First, the clerk provides a paper registration form, where the customer enters his or her name, ID number (from a Kenyan National ID, Passport, Military ID, Diplomatic ID, or Alien ID), date of birth, occupation, and mobile phone number. The clerk then checks the ID and inputs the customer's registration information into a special application on his or her own mobile phone. If the customer's SIM card is an old one that is not preloaded with the M-PESA application, the clerk replaces it. The customer's phone number is not changed even if the SIM card is. Safaricom then sends both the customer and the outlet an SMS confirming the transaction. The SMS gives the customer a four-digit start key (a one-time password), which they use to activate their account. Customers enter the start key and ID number, and they are then asked to input a secret PIN of their choice, which completes the registration process. In addition to leading customers through this process, retail outlets explain how to use the application and the tariffs associated with each service. Such agent support early in the process is particularly important in rural areas, where a significant percentage of the potential user base is illiterate or unfamiliar with the functioning of their mobile phone. The minimum deposit amount was originally set at around U.S.$1.20 but has since been halved, and there is no minimum balance requirement. Customers can deposit money for free, so there is no immediate barrier to taking up the service. M-PESA charges customers only for doing something with their money, such as making a transfer, withdrawal, or prepaid airtime purchase.
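A compressed sketch of the activation step described above is given below. The code length, the fields checked, and the PIN rule are our own assumptions, not Safaricom's implementation; the sketch only illustrates the one-time start key pattern.

```python
# Sketch of customer activation: a one-time "start key" is sent by SMS, and the
# customer uses it, together with an ID number, to set a secret PIN.
# Field names and validation rules are illustrative assumptions.

import secrets

def issue_start_key() -> str:
    """Generate the four-digit one-time password sent to the new customer by SMS."""
    return f"{secrets.randbelow(10000):04d}"

def activate_account(entered_key: str, issued_key: str,
                     entered_id: str, registered_id: str,
                     chosen_pin: str) -> bool:
    """Activate the account if the start key and ID match and the PIN looks sane."""
    if entered_key != issued_key or entered_id != registered_id:
        return False
    return chosen_pin.isdigit() and len(chosen_pin) == 4

start_key = issue_start_key()
print("SMS to customer: your start key is", start_key)
print("Activated:", activate_account(start_key, start_key, "12345678", "12345678", "4321"))
```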

A simple message targeting a big pain point


M-PESA was originally conceived as a way for customers to repay microloans. However, as Safaricom market-tested the mobile money proposition, the core proposition was shifted from loan repayment to helping people make P2P transfers to their friends and family. From its commercial launch, M-PESA has been marketed to the public with just three powerful words: "send money home." This message was well adapted to the Kenyan phenomenon of split families discussed above and tapped into a major pain point for many Kenyans: the risks and high cost associated with sending money over long distances. This basic e-remittance product became the must-have killer application that continues to drive service take-up and remains the main (though not only) marketing message four years later. Although people have proved creative in using M-PESA for their own special needs, sending money home continues to be one of the most important uses: the share of households receiving money transfers in Kenya has increased from 17 percent to 52 percent since M-PESA was introduced.15

15 FinAccess Survey, FSDT (2009a), p 16.




Being able to send money to anyone


M-PESA customers can send money to non-M-PESA customers, including any person with a GSM mobile phone in Kenya, whether they are subscribers of Safaricom or of any of the other three competing networks (Airtel, Orange, and Yu). Under this service, money is debited from the sender's account, and the recipient gets a code by SMS which she can use to claim the monetary value at any M-PESA store. Thus, it is an account-to-cash service, with the receiver's experience being similar to how Western Union works today. The pricing of this service is interesting: customers pay a higher (roughly triple) P2P charge when sending money to a non-customer, but at the other end cashing out is free for a non-customer, whereas registered customers pay a cash-out fee of at least U.S.$0.30. Why penalize the customer rather than the non-customer? Safaricom understood that the sender had power over the recipient, so it chose to put pressure on the sender to require the recipient to register with M-PESA. Furthermore, non-customers got a great first experience with M-PESA when they received money for free, which Safaricom hoped would convince them to register for M-PESA.
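The pricing asymmetry is easiest to see side by side. The fee levels below are approximations pieced together from figures quoted in this article (a flat P2P fee of around U.S.¢35, roughly tripled for unregistered recipients, and a minimum cash-out fee of U.S.$0.30 for registered customers); they are indicative rather than an official tariff.

```python
# Approximate cost of moving money to a registered versus an unregistered
# recipient. Fee levels are approximations based on figures in this article.

P2P_FEE_REGISTERED = 0.35      # flat P2P fee when the recipient is an M-PESA customer
P2P_FEE_UNREGISTERED = 1.05    # roughly triple when the recipient is not registered
CASH_OUT_REGISTERED = 0.30     # minimum cash-out fee paid by a registered recipient
CASH_OUT_UNREGISTERED = 0.00   # cashing out the coded transfer is free for non-customers

def fees(registered_recipient: bool):
    """Return (sender fee, recipient fee) in U.S. dollars for one transfer."""
    if registered_recipient:
        return P2P_FEE_REGISTERED, CASH_OUT_REGISTERED
    return P2P_FEE_UNREGISTERED, CASH_OUT_UNREGISTERED

for registered in (True, False):
    sender_fee, recipient_fee = fees(registered)
    label = "registered" if registered else "unregistered"
    print(f"To {label} recipient: sender pays ${sender_fee:.2f}, "
          f"recipient pays ${recipient_fee:.2f}, total ${sender_fee + recipient_fee:.2f}")
```

The sender bears the extra cost, which is exactly the pressure point Safaricom wanted: it is the sender who can push the recipient to register.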

Building trust in the retail network


Safaricom recognized that M-PESA would not achieve rapid adoption unless customers had enough trust in the M-PESA retail network that they were willing to conduct cash-in/cash-out transactions through those outlets. Safaricom employed several measures to build that trust. Firstly, it closely linked the M-PESA brand to customers' affinity with and trust in Safaricom's strong corporate brand. As the mobile operator in Kenya with a dominant share (over 80 percent at M-PESA's launch and scarcely less today), Safaricom was already a broadly respected and trusted brand, even among low-income customers. (M-PESA retail outlets are required to paint their stores Safaricom green, which not only builds customers' confidence that the store is acting on behalf of Safaricom, but also makes it easier for customers to locate cash-in/cash-out points.)

Secondly, Safaricom ensured that customers can walk into any authorized retail outlet and have a remarkably similar experience. This has helped to build trust in the platform and the outlets, and gives customers a consistently positive view of the service. Safaricom maintains this control over the customer experience by investing heavily in store training and on-site supervision. Safaricom chose to centralize these functions in a single third-party vendor (Top Image) rather than relying on its channel intermediaries (i.e., master agents) to cascade these functions to retail shops. A Top Image representative visits each outlet at least monthly and rates each store on a variety of criteria, including visibility of branding and the tariff poster, availability of cash and M-PESA electronic value to accommodate customer transactions, and the quality of recordkeeping.

Thirdly, customers receive instant SMS confirmation of their transaction, helping customers learn by experience to trust the system. The confirming SMS constitutes an electronic receipt, which can be used in dispute resolution. The receipt confirming a money transfer details the name and number of the recipient and the amount transferred. This allows the sender to confirm instantly that the money was sent to the right person, the most common source of error.

Finally, Safaricom requires its outlets to record all cash-in/cash-out transactions in a paper-based, Safaricom-branded logbook. For each transaction, the store clerk enters the M-PESA balance, the date, agent ID, transaction ID, transaction type (customer deposit or withdrawal), value, customer phone number, customer name, and the customer's national ID number. Customers are then asked to sign the log for each transaction, which helps discourage fraud and also gives agents a way to offer first-line customer care for customers querying previous transactions. Each page in the log is in triplicate: the top copy is kept by the retail outlet for its own records, a second is passed on to the store's master agent, and the third is sent to Safaricom. Recall that all information contained in the agent log (except for the customer signature) is captured electronically by Safaricom when the transaction is made and is available to the master agents via their web management system. Hence, the main purpose of the agent log is not recordkeeping, but to provide comfort to customers who are used to having transactions recorded on paper.
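The fields captured in the agent log lend themselves to a simple record type. The sketch below mirrors the list given in the text; the types, defaults, and example values are our own choices.

```python
# The fields recorded for each cash-in/cash-out transaction in the agent log,
# expressed as a simple record type (types and example values are our own).

from dataclasses import dataclass
from datetime import date

@dataclass
class AgentLogEntry:
    mpesa_balance: float        # store's M-PESA balance
    entry_date: date
    agent_id: str
    transaction_id: str
    transaction_type: str       # "customer deposit" or "customer withdrawal"
    value: float
    customer_phone: str
    customer_name: str
    customer_national_id: str
    customer_signed: bool = False   # the signature exists only on the paper copy

entry = AgentLogEntry(
    mpesa_balance=250.0, entry_date=date(2011, 4, 30), agent_id="AG-001",
    transaction_id="TX-0001", transaction_type="customer deposit", value=12.0,
    customer_phone="0722000000", customer_name="Jane Doe",
    customer_national_id="12345678", customer_signed=True,
)
print(entry)
```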

Simple and transparent pricing


M-PESA pricing is made transparent and predictable for users. There are no customer charges for the SMSs that deliver the service; instead, fees are applied to the actual customer-initiated transactions. All customer fees are subtracted from the customer's account, and outlets cannot charge any direct fees. Thus, outlets collect their commissions from Safaricom (through their master agents) rather than from customers. This reduces the potential for agent abuses. Customer fees are uniform nationwide, and they are prominently posted in all outlet locations. M-PESA chose to specify its fees in fixed currency terms rather than as a percentage of the transaction. This makes it easier for customers to understand the precise cost of each transaction and helps them think of


the fee in terms of the transaction's absolute value (e.g., sending money to grandmother). It also helps them compare the transaction cost against alternative, and usually costlier, money-transfer arrangements (e.g., the bus or matatu fare plus travel time). Deposits are free to customers. Withdrawals under U.S.$30 cost around U.S.¢30. Withdrawal charges are banded (i.e., larger transactions incur a larger cost) so as not to discourage smaller transactions. ATM withdrawals using M-PESA are slightly more expensive than at a retail outlet (U.S.¢35 versus U.S.¢30). P2P transfers cost a flat rate of around U.S.¢35. This is where Safaricom makes the bulk of its revenue. Thus, for a purely electronic transfer, customers pay more than double what they pay for the average cash transaction (U.S.¢15), despite the cost to provide being lower for purely electronic transactions than for those involving cash. This reflects a notion of optimal pricing that is based less on cost and more on customer willingness to pay: enabling remote payments is the biggest customer pain point which M-PESA aims to address. M-PESA is cheaper than the other available mechanisms for making remote payments, such as money transfer by the bus companies, Kenya Post's PostaPay, or Western Union.16 It is noteworthy that M-PESA largely maintained the same pricing for transactions in its first three years, despite the significant inflation experienced during the period. This helped establish customer familiarity with the service. Early on, Safaricom changed the pricing for two customer requests that do not involve a financial transaction: balance inquiries (because the initial low price generated an overly burdensome volume of requests) and PIN changes (because customers were far more likely to remember their PIN if the fee to change it was higher). The volume of both types of requests decreased substantially after these price changes. As noted earlier, the SMS confirmation of a transaction contains the available balance, which also helps cut down on the number of balance inquiries. More recently, M-PESA introduced new pricing tiers for very small (U.S.$0.60-1.20) and large (U.S.$400-800) transactions.
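A banded, fixed-amount tariff of the kind described above is naturally represented as a simple lookup table. The band boundaries and fee amounts below are invented placeholders, not Safaricom's actual tariff; the sketch only shows why such a schedule is easy for customers to reason about in advance.

```python
# Sketch of a banded withdrawal tariff: fees are fixed currency amounts per band,
# so customers know the exact cost in advance. Bands and fees are invented
# placeholders, not the actual M-PESA tariff.

WITHDRAWAL_BANDS = [
    # (upper bound of the band in U.S.$, flat fee in U.S.$)
    (1.20, 0.10),
    (30.00, 0.30),
    (100.00, 0.60),
    (800.00, 2.00),   # U.S.$800 is roughly the per-transaction cap cited earlier
]

def withdrawal_fee(amount_usd: float) -> float:
    """Return the flat fee for a withdrawal of the given size."""
    for upper_bound, fee in WITHDRAWAL_BANDS:
        if amount_usd <= upper_bound:
            return fee
    raise ValueError("amount exceeds the per-transaction cap")

for amount in (0.80, 25, 90, 500):
    print(f"Withdraw U.S.${amount}: fee U.S.${withdrawal_fee(amount):.2f}")
```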

Yet M-PESA's liquidity system is not without its challenges. Due to cash float constraints, M-PESA retail outlets cannot always meet requests for withdrawals, especially large withdrawals. Furthermore, the agent commission structure discourages outlets from handling large transactions. As a result, customers are sometimes forced to split their transactions over a few days, taking money out in bits rather than withdrawing a lump sum, adding both cost and inconvenience. It also undermines customer trust in M-PESA as a mechanism for high-balance, long-term saving. Using bank branches and ATMs to give customers a liquidity mechanism of last resort has bolstered the credibility of the M-PESA system.

Execution: getting to critical mass, quickly


With a strong service design in place, Safaricom then set about developing its execution plan. It recognized that it would be difficult to scale M-PESA incrementally, as it had to overcome three significant hurdles that are common to any new electronic payment system, namely:

Adverse network effects: the value to the customer of a payment system depends on the number of people connected to and actively using it. The more people on the network, the more useful it becomes.17 While network effects can help a scheme gain momentum once it reaches a critical mass of customers, they can make it difficult to attract early adopters in the early phase, when there are few users on it.

Chicken-and-egg trap: in order to grow, M-PESA had to attract both customers and stores in tandem. It is hard to sell the proposition to customers while there are few stores to serve them, and equally hard to convince stores to sign up while there are few customers to be had. Thus, the scheme needed to drive both customer and store acquisition aggressively.

Trust: customers have to gain confidence in the reliability of a new system. In this case, customers had to be comfortable with three elements that were new at the time in Kenya: (i) a payment system operated by a mobile operator; (ii) going to non-bank retail outlets to meet their cash-in/cash-out needs; and (iii) accessing their account and initiating transactions through their mobile phone.

Liquidity of last resort at bank branches and ATMs


From very early on, M-PESA signed up banks as agents, so that any M-PESA customer could walk into the branches of several banks to conduct cash-in/cash-out transactions. One year after its launch, M-PESA went further and partnered with PesaPoint, one of the largest ATM service providers in Kenya. The PesaPoint network includes over 110 ATMs scattered all over the country, giving them a presence in all eight provinces. Customers can now retrieve money from any PesaPoint ATM. To do so, they must select ATM withdrawal from their M-PESA menu. They then receive a one-time ATM authorization code, which they enter on the ATM keyboard to make the withdrawal. No bank card is needed for this transaction. By accessing the PesaPoint ATM network, M-PESA customers can now make withdrawals at any time, day or night.
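One way to picture the card-less withdrawal flow just described is as a short-lived, single-use code issued to the phone and redeemed once at the ATM. The code format, expiry window, and bookkeeping below are our own assumptions for illustration, not the actual M-PESA/PesaPoint integration.

```python
# Sketch of a card-less ATM withdrawal using a one-time authorization code.
# Code format, expiry, and storage are illustrative assumptions only.

import secrets
import time

ISSUED_CODES = {}   # code -> (phone_number, amount, expiry_timestamp)

def request_atm_withdrawal(phone: str, amount: float, ttl_seconds: int = 300) -> str:
    """Customer selects 'ATM withdrawal' on the phone and receives a one-time code."""
    code = f"{secrets.randbelow(10**6):06d}"
    ISSUED_CODES[code] = (phone, amount, time.time() + ttl_seconds)
    return code

def redeem_at_atm(code: str):
    """ATM submits the code; it is valid once, and only before it expires."""
    record = ISSUED_CODES.pop(code, None)
    if record is None:
        return None
    _phone, amount, expiry = record
    return amount if time.time() <= expiry else None

code = request_atm_withdrawal("0722000000", 20.0)
print("SMS to customer: your ATM code is", code)
print("ATM dispenses:", redeem_at_atm(code))
print("Second attempt:", redeem_at_atm(code))   # the code cannot be reused
```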

These problems reinforce each other in the early-stage development of a payments system, creating a significant hurdle to growth. We suspect this
16 In her field research, Olga Morawczynski finds that sending KSh 1,000 through M-PESA is 27 percent cheaper than the post office's PostaPay, and 68 percent cheaper than sending it via a bus company. See Morawczynski and Pickens (2009). 17 It has become habitual to illustrate network effects with reference to fax machines: the first set of people who bought a fax machine did not find it very useful, as they could not send faxes to many people. As more people bought fax machines, everyone's fax became more and more useful. Network effects are sometimes referred to as demand-side economies of scale, to emphasize that scale affects the value of the service to each customer. This distinguishes them from supply-side economies of scale, which refer to situations where average costs per customer fall as volume increases. Davidson (2009) discusses implications of network effects for mobile money.

177

hurdle helps explain why many other mobile money deployments remain sub-scale. M-PESA overcame this hurdle through very forceful execution on two key fronts: (i) Safaricom made significant up-front investments in building a strong service brand for M-PESA; and (ii) Safaricom effectively leveraged its extensive network of airtime resellers to build a reliable, consistent retail network that served customers liquidity needs.

Aggressive up-front investment in promoting the M-PESA brand

From the beginning, Safaricom sought to foster customer trust in the new payment mechanism and relied on existing customers as the prime mechanism for drawing in new ones. This was all the more difficult because Safaricom was introducing not only a new product, but an entirely new product category, to a market that had little experience with formal financial services. The internal launch target for M-PESA was 1 million customers within one year, equal to 17 percent of Safaricom's customer base of about 6 million customers at that time.18

National launch at scale: after small pilots involving fewer than 500 customers,19 M-PESA launched nationwide, increasing the likelihood that the service could reach a critical mass of customers in a short time frame. At launch, Safaricom had 750 stores and had made sure to cover all of Kenya's 69 district headquarters. It was a massive logistical challenge that led to a great deal of customer and store confusion and, in the first months after launch, delays of several days in reaching customer service hotlines. User and store errors were frequent, since everyone was new to the service. But the gamble paid off. Logistical problems subsided after a few months, leaving strong brand recognition and top-of-mind awareness among large segments of the population. The service outran first-year growth targets, quickly turning network effects in its favor as new customers begat more customers and turned M-PESA into a compelling business proposition for more stores.

An appropriate marketing mix: initial marketing featured and targeted the wealthier city dweller with the need to send money home. This choice of the richer urban dweller as the initial customer created an aspirational image for M-PESA and avoided the impression that it was a low-value product aimed at the poor. Over time, the marketing moved from young, upmarket urban dwellers with desk jobs to more ordinary Kenyans in lower-paid occupations. While M-PESA's launch was associated with significant up-front investment in above-the-line marketing via TV and radio,20 there was also intense outreach through road shows and tents that traveled around the country signing people up, explaining the product, and demonstrating how to use it. Over time, as people became more familiar with the product and how to use it, this kind of hands-on outreach was no longer necessary. TV and radio were largely replaced by the omnipresent M-PESA branding at all outlets, supported with a few large billboards. Newer ads feature a more general emotional appeal, with a wider range of services indicated.

A scalable distribution channel

Safaricom understood that the primary role of the mobile phone is to enable the creation of a retail outlet-based channel for cash-to-digital value conversion. For this cash-to-digital conversion to be broadly available to the bulk of the population, Safaricom had to develop a channel structure that could support thousands of M-PESA stores spread across a broad geography. To achieve this, it built four elements into its channel management execution strategy: (i) engaging intermediaries to help manage the individual stores, thereby reducing the number of direct contacts it had to deal with; (ii) ensuring that outlets were sufficiently incentivized to actively promote the service; (iii) maintaining tight control over the customer experience; and (iv) developing several different methods for stores to rebalance their stocks of cash and e-value.

Two-tier channel management structure: Safaricom created a two-tier structure with individual stores (sub-agents, in Safaricom's parlance) that depend on master agents (referred to by Safaricom as agent head offices [HOs]). Agent HOs maintain all contact with Safaricom and perform two key functions: (i) liquidity management (buying and selling M-PESA balance from Safaricom and making it available to the individual stores under their responsibility), and (ii) distributing agent commissions (collecting the commission from Safaricom based on the overall performance of the stores under them and remunerating each store). Individual stores may be directly owned by an agent HO or may work for one under contract.

Incentivizing stores: retail outlets will not maintain sufficient stocks of cash and e-money unless they are adequately compensated for doing so. Hence, Safaricom pays commissions to agent HOs for each cash-in/cash-out transaction conducted by the stores under their responsibility. Safaricom did not prescribe the commission split between agent HOs and stores, though most agent HOs pass on 70 percent of commissions to the store.21 For deposits under U.S. $30, Safaricom pays U.S. ¢11.8 in total commissions (pre-tax), of which U.S. ¢6.5 goes to the store after tax. For withdrawals, Safaricom pays U.S. ¢17.6 to the channel, of which U.S. ¢9.8 goes to the store. So, assuming equal volumes of deposits and withdrawals, the store earns around U.S. ¢8.2 per transaction. Assuming the store conducts 70 transactions per day, it earns around U.S. $5.70, almost twice the prevailing daily wage for a clerk in Kenya.
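As a cross-check on the store economics just described, the short Python calculation below reproduces the per-transaction and daily figures from the quoted commission schedule. The equal deposit/withdrawal mix and the 70-transactions-per-day volume are taken from the text; the U.S. $3.00 daily clerk wage is our own rough assumption, backed only by the "almost twice the prevailing daily wage" remark.

# Store-level economics of M-PESA cash-in/cash-out, using the figures quoted in the text
# (after-tax store share of commissions on transactions under U.S. $30).

STORE_SHARE_DEPOSIT = 0.065      # U.S. $ per deposit reaching the store (U.S. ¢6.5)
STORE_SHARE_WITHDRAWAL = 0.098   # U.S. $ per withdrawal reaching the store (U.S. ¢9.8)
TRANSACTIONS_PER_DAY = 70        # daily volume assumed in the text
CLERK_DAILY_WAGE = 3.00          # hypothetical benchmark implied by "almost twice the daily wage"

# Assuming equal volumes of deposits and withdrawals:
average_per_transaction = (STORE_SHARE_DEPOSIT + STORE_SHARE_WITHDRAWAL) / 2
daily_store_earnings = average_per_transaction * TRANSACTIONS_PER_DAY

print(f"average earned per transaction: U.S. ¢{average_per_transaction * 100:.2f}")   # ~¢8.2
print(f"daily store earnings:           U.S. ${daily_store_earnings:.2f}")            # ~$5.70
print(f"multiple of clerk's daily wage: {daily_store_earnings / CLERK_DAILY_WAGE:.1f}x")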

18 Safaricom company results for the year ending March 2007.
19 The earliest pilot project, conducted in 2004/05, revolved around microloan repayments and involved the Commercial Bank of Africa, Vodafone, Faulu Kenya, and MicroSave, in addition to Safaricom.
20 A survey of 1,210 users in late 2008 revealed that 70 percent of respondents claimed they had first heard about M-PESA from advertisements, TV, or radio. FSDT (2009b), p. 6.
21 Safaricom wants the split to be 20 percent/80 percent, thus passing more of the commission down to the retail outlet.


Recall that Safaricom charges customers U.S. ¢30 (¢29.4 to be exact) on a round-trip savings transaction (a free deposit plus a U.S. ¢29.4 withdrawal), which is, in fact, equal to what the channel gets (U.S. ¢11.8 on the deposit plus U.S. ¢17.6 on the withdrawal). So, assuming equal volumes of deposits and withdrawals, Safaricom does not make any money on cash transactions. It merely advances commissions to the channel when customers deposit and recoups them when customers withdraw. By charging U.S. ¢35 on electronic P2P transactions (which are almost costless to provide), Safaricom opted to generate the bulk of its revenue from the service for which customer willingness to pay is highest: remote P2P payments. Because store revenues depend on the number of transactions they facilitate, Safaricom was careful not to flood the market with too many outlets, lest it depress the number of customers per agent. Instead, it maintained balanced growth in the number of outlets relative to the number of active customers, resulting in an incentivized and committed agent base.

Maintaining tight control over the customer experience: Safaricom also recognized that customers need to have a good experience at the retail points, where the bulk of transactions take place. To ensure that it maintained control over the customer experience, Safaricom did not rely on the broad base of agent HOs to perform all channel management functions. Instead (as mentioned above), it concentrated the evaluation, training, and on-site supervision of stores in a single outsourcing partner, Top Image. Thus, Safaricom delegated the more routine, desk-bound, non-customer-facing store support activities (i.e., liquidity management and distribution of store commissions) to a larger pool of agent HOs. At the same time, through its contract with Top Image, it retained direct, centralized control over the key elements of the customer experience (i.e., store selection, training, and supervision).

Developing multiple store liquidity management methods: by far the biggest challenge faced by M-PESA stores is maintaining enough liquidity, in terms of both cash and e-float, to be able to meet customer requests for cash-in and cash-out. If they take too many cash deposits, stores will find themselves running out of e-float with which to facilitate further deposits. If they do too many withdrawals, they will accumulate e-float but run out of cash. Hence, they frequently have to rebalance their holdings of cash versus e-float. This is what we refer to as liquidity management. The M-PESA channel management structure was conceived to offer stores three methods for managing liquidity. Two of these place the agent HO in a central role, with the expectation that the agent HO will recycle e-float between locations experiencing net cash withdrawals (i.e., accumulating e-float) and locations experiencing net cash deposits (i.e., accumulating cash). We discuss each of these methods in turn:

Agent HO provides direct cash support to stores: under this option, the store clerk comes to the agent HO's head office to deliver or offload cash, or the agent HO sends cash runners to the store to perform these functions (not very common).

Agent HO and stores use their respective bank accounts: under this option, if the store has excess cash and wants to buy M-PESA e-float from the agent HO, the store deposits the cash into the agent HO's account at the nearest bank branch or ATM. Once the agent HO confirms receipt of the funds into its account, it transfers M-PESA e-float to the store's M-PESA account. If the store wants to sell e-float to get cash, the store transfers M-PESA e-float to the agent HO. The agent HO then deposits (or transfers) money into the store's account at a branch of the store's bank, and the store can withdraw the cash at the nearest branch or ATM.

Stores interact directly with a bank that has registered as an M-PESA superagent: under this option, the agent HO does not get involved in liquidity management. Instead, stores open an account with a participating superagent bank. To rebalance their cash, stores deposit and withdraw cash against their bank account at the nearest branch or ATM of that bank, and then electronically buy and sell e-float in real time against the bank account. From a store's perspective, one drawback of the bank-based superagent mechanism is that it can only be used during banking business hours, which presents a problem for stores in the evenings and on weekends.

The e-float/cash nexus will remain the key constraint on the further development of M-PESA, since rebalancing requires the physical movement of cash around the country and is thus the least scalable part of the system.
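The rebalancing problem that the three methods address can be seen in a few lines of code. The Python sketch below, with invented opening balances and transaction sizes, tracks a single store's cash drawer and e-float through a run of withdrawals: every cash-out converts cash into e-float, and a sustained imbalance eventually forces a rebalancing trip to the agent HO, the bank, or the superagent. It is a minimal model of the mechanics described above, not of any real store's books.

# Minimal model of an M-PESA store's liquidity position (all figures hypothetical).

class Store:
    def __init__(self, cash, e_float):
        self.cash = cash          # physical cash in the drawer (U.S. $)
        self.e_float = e_float    # electronic value on the store's M-PESA account

    def customer_deposit(self, amount):
        """Cash-in: the store takes cash and hands over e-money from its own float."""
        if self.e_float < amount:
            return False          # cannot serve the deposit: e-float exhausted
        self.cash += amount
        self.e_float -= amount
        return True

    def customer_withdrawal(self, amount):
        """Cash-out: the store pays out cash and receives e-money in exchange."""
        if self.cash < amount:
            return False          # cannot serve the withdrawal: cash exhausted
        self.cash -= amount
        self.e_float += amount
        return True

    def rebalance_via_superagent(self, target_cash):
        """Sell (or buy) e-float against the store's bank account to restore a target cash level."""
        self.e_float += self.cash - target_cash
        self.cash = target_cash

store = Store(cash=100.0, e_float=100.0)
# A rural store mostly pays out incoming remittances, so withdrawals dominate:
for amount in [20, 25, 15, 30, 25]:
    if not store.customer_withdrawal(amount):
        print(f"turned away a U.S. ${amount} withdrawal; cash = {store.cash}, e-float = {store.e_float}")
        store.rebalance_via_superagent(target_cash=100.0)   # convert surplus e-float back into cash
print(f"end of day: cash = {store.cash}, e-float = {store.e_float}")

The sketch nets out the bank leg of the superagent method in a single step; in practice, the cash still has to be fetched from a branch or ATM, which is exactly why this remains the least scalable part of the system.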

M-PESA's future evolution


The experience of M-PESA demonstrates how powerful a payment network that offers convenience at an affordable cost can be once a critical mass of customers is reached. It also shows that achieving critical mass requires both a service design that removes as many adoption barriers as possible and significant investment in marketing, branding, and agent network management. The Kenyan experience also suggests that several country-level environmental factors need to align to set the scene for a successful mobile money development, including the labor market profile (demand for remittances generated by rural-urban migration), the quality of available financial services, support from the banking regulator, and the structure of the mobile communications market (a dominant mobile operator and low airtime commissions). Yet, while M-PESA has been successful beyond what anyone could have imagined at its launch, the model still has substantial room to develop further. Our wish list for M-PESA is three-fold: (i) the mainstreaming of M-PESA's regulatory treatment; (ii) pricing that opens up a much larger market of micro-transactions; and (iii) the building of a much more robust ecosystem around M-PESA that enables customers to access a broader range of financial services. We address each of these below, before offering some concluding thoughts on how M-PESA offers a rekindled vision for achieving financial inclusion in developing countries.

Mainstreaming M-PESA's regulatory treatment

M-PESA's regulatory treatment as a payments vehicle needs to be formalized so that it can be regulated in the most appropriate way. To this end, the CBK has been trying to get a new payments law enacted by Parliament, but the draft has not yet been approved. The intention is for M-PESA to be covered in future by regulations emanating from this payments law. The CBK issued new agent banking regulations in early 2010 that allow commercial banks to use retail outlets as a delivery channel for financial services. This gave banks the possibility of replicating the M-PESA service themselves. However, the requirements on both banks and their agents are more onerous than those that apply to Safaricom (a non-bank) and its agents. In early 2011, the CBK issued e-money and payment service provider guidelines that incorporate less restrictive agent regulations. By signing up agents under these guidelines rather than as banking agents, banks can now deploy agents on terms similar to Safaricom's.

Pricing that enables smaller payments

M-PESA's current pricing model is not conducive to small transactions. A U.S. $10 P2P transfer plus withdrawal, for example, costs around 6.5 percent of the transaction size (U.S. $0.35 for the transfer plus around U.S. $0.30 for the withdrawal). We see two advantages to adjusting M-PESA's current pricing model to make it work for smaller-denomination transactions:

It would make the service accessible to a poorer segment of the population, for whom pricing is currently too high given their transactional needs. This would allow Safaricom to maintain customer growth once saturation starts to set in at current pricing.

It would allow customers to use M-PESA for their daily transaction needs, and in particular to save on a daily basis when they are paid daily.

A reduction in customer prices could come about in several ways:

For electronic transactions: the current P2P charge of U.S. ¢35 leaves substantial scope for price reductions. But let us be careful. There is a compelling logic behind the current model of extracting value from remote payments (for which there is substantial customer willingness to pay) while maintaining tight pricing on cash transactions (for which customers are less willing to pay). We do believe, however, that there is room for tranching the P2P fee so that the price works for smaller (i.e., daily) transactions.

For cash transactions: one way to enable lower fees would be to create a category of street-level sub-agents, characterized by lower costs and commissions than store-based agents. Sub-agents would be a kind of e-susu collector, operating with small working capital balances in order to aggregate small customer transactions. Sub-agents would use normal M-PESA retail outlets to rebalance their cash and M-PESA stored value. The key principle here is that segmentation of customers needs to go hand in hand with segmentation of agents.
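The regressive effect of flat fees on small transactions can be made explicit with a short calculation. The Python sketch below applies the approximate charges quoted above, a U.S. ¢35 P2P fee plus a roughly U.S. ¢30 withdrawal fee for amounts under U.S. $30, to a range of transfer sizes; treating the withdrawal fee as flat across this whole range is purely for illustration, since the actual published tariff bands are not reproduced here.

# Effective cost of a 'send money home' round trip (P2P transfer + recipient withdrawal)
# as a share of the amount sent, using the approximate fees quoted in the text.

P2P_FEE = 0.35            # U.S. $ per person-to-person transfer
WITHDRAWAL_FEE = 0.30     # approximate U.S. $ fee for withdrawals under U.S. $30

for amount in (2, 5, 10, 20, 30):
    total_fees = P2P_FEE + WITHDRAWAL_FEE
    effective_rate = total_fees / amount
    print(f"send U.S. ${amount:>2}: fees U.S. ${total_fees:.2f} = {effective_rate:.1%} of the amount")

# Output: 32.5% on a $2 transfer, 13.0% on $5, 6.5% on $10, 3.2% on $20, 2.2% on $30.
# A fee schedule that tapers for small amounts, or a cheaper sub-agent tier for cash,
# is what would make daily-sized transactions economic for poorer users.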


Linking with banks and other institutional partners to offer a fuller range of financial services

While some customers use M-PESA as a savings device, it still falls short of being a useful savings proposition for most poor people. According to the January 2009 CBK audit of M-PESA, the average balance on M-PESA accounts was around U.S. $3. This is partly a large-numbers problem: if 900,000 people used M-PESA to save, that would be only 10 percent of users, and their savings would be diluted in the average balance across all accounts. But the fundamental problem is that there is still a lot of conversion of electronic value back into cash, say following receipt of a domestic remittance. We attribute this to a combination of factors:

Lack of marketing: Safaricom does not want to publicly promote the savings usage of M-PESA for fear of provoking the Central Bank into tighter regulation of M-PESA.

Customer pricing: there is a flat fee of around U.S. ¢30 for withdrawals under U.S. $30, which means that small withdrawals carry a large percentage fee.

Product design: M-PESA works very much like an electronic checking account, and does not offer structured saving products that might help people build discipline around savings.

Inflation: M-PESA does not pay interest. In an environment with 15 percent inflation (as during its first full year of operation in 2008), this may be too onerous for savings.

Trust: deposits are not supervised by the Central Bank. And unlike payments, where trust can be validated experientially in real time, savings requires trust over a longer period of time.

Privacy: people may want more privacy in their savings behavior than an agent provides.

Excess liquidity: 23,400 cash-in points are also 23,400 cash-out points. The ubiquity of M-PESA agents may make it too easy for customers to cash out their funds, thus limiting their ability to accumulate large balances.

Rather than expecting Safaricom to develop and market richer savings services, we believe that M-PESA should support savings propositions by linking into banks. M-PESA would then become a massive transaction acquisition network for banks rather than an alternative to them. Safaricom is beginning to connect with banks. In May 2010, for example, Equity Bank and M-PESA announced a joint venture, M-KESHO, which permits M-PESA users to move money between their M-PESA mobile wallet and an interest-bearing Equity Bank account. M-PESA would also benefit from establishing further linkages with institutions beyond banks, such as billers, distributors, and employers. By promoting M-PESA as a mechanism for distributing salaries and social welfare payments, enabling payments across supply chains, and paying bills, the need for cash-in and cash-out would be minimized and, as a result, a key component of transaction costs could be reduced. We also suspect that savings balances would be higher if people received payments directly into their accounts rather than in cash, and if they had more useful things they could do with their money in electronic form.

Concluding thoughts: how M-PESA can reinvigorate visions around financial inclusion
Imagine a world where banks are nowhere near where you live. The nearest branch is 10 kilometers away, but it takes you almost an hour to get there by foot and bus because you do not have your own wheels. With waiting times at the branch, that is a round trip of two hours, a quarter or so of your working day gone. The bus fare is only 50 cents, but that is one quarter of what you make on a good day. So each banking transaction costs you the equivalent of almost half a day's wages. It would be like an ATM charging us something like U.S. $50 for each transaction, given what we earn. Then imagine a world without credit instruments or electronic payments. No checks, no cards, no money orders, no direct debits, no internet banking. All your transactions are done in cash or, worse, by bartering goods. All exchanges are physical, person-to-person, hand-to-hand. Consider the hassle and the risk of sending money to distant relatives, business partners, or banks. How would you operate in such a world?

A recent book, Portfolios of the poor [Collins et al. (2009)], has documented how poor people cope: how they save to push some excess money from today to tomorrow, and how they borrow to pull tomorrow's money forward to fund some needed expense today. You store some cash in the home to meet daily needs, you park it with a trusted friend for emergencies, you buy jewelry because that represents a future for your children, you pile up some bricks for the day when you can build an extra room on your house. You make regular contributions to a savings group with a circle of friends to build up a pot, and one day it will be your turn to take that pot home to buy new clothes. You also borrow from friends, seek advances from your employer, pawn some of your jewelry, and go to the moneylender. The authors of Portfolios of the poor document poor families across India, Bangladesh, and South Africa using up to 14 different mechanisms to manage their financial lives. You have few options, and you need to deploy all your ingenuity to use them all, precisely because none is very good. Some are not very safe because of their sheer physicality. If you save by storing your grain or buying goats, when your village hits hard times you may not be able to find ready buyers for your grain or goats. Forget about getting loans from neighbors during hard times. The local moneylender runs a quasi-monopoly in the village because it is too costly for you to go to moneylenders in other villages, and in any case they do not know you there. So you end up paying dearly for a loan. We estimate that over 2 billion people need to cope with such circumstances.

The lack of good financial options is undoubtedly one of the reasons why poor people are trapped in poverty. They cannot sustain or even aspire to higher incomes because they are not able to invest in better farming tools and seeds to enhance their productivity, start a microenterprise, or even take the time to search for better-paying employment opportunities. Their income is volatile, often fluctuating daily, so without reliable ways of pushing and pulling money between good days and bad days they may have to face the stark decision to pull the kids out of school or put less food on the table during bad patches. And without good financial tools they may not be able to cope with shocks that set them back periodically. Most of these shocks are foreseeable, if not entirely predictable: a drought, ill health, lifecycle events such as marriage and death.

Cash is the main barrier to financial inclusion. As long as poor people can only exchange value in cash or, worse, physical goods, they will remain too costly for formal financial institutions to serve in significant numbers. Collecting low-value cash deposits and redeeming savings back into small sums of cash requires a costly infrastructure that few banks are willing to extend into low-income or rural areas. But once poor people have access to cost-effective electronic means of payment such as M-PESA, they could, in principle, be profitably served by a range of financial institutions. M-PESA itself does not constitute financial inclusion. But it does give us glimpses of a commercially sound, affordable, and effective way to offer financial services to all.

References

Camner, G., and E. Sjöblom, 2009, Can the success of M-PESA be repeated? A review of implementations in Kenya and Tanzania, Valuable Bits note, July
Collins, D., J. Morduch, S. Rutherford, and O. Ruthven, 2009, Portfolios of the poor: how the world's poor live on $2 a day, Princeton University Press
Davidson, N., 2009, Tactics for tipping markets: influence perceptions and expectations, GSM Association, Mobile Money for the Unbanked blog, November 15
Financial Sector Deepening Trust [FSDT], 2007a, FinAccess Kenya 2006: results of a financial survey on access to financial services in Kenya
Financial Sector Deepening Trust [FSDT], 2007b, Key findings of the FinScope survey in Tanzania in 2006


Financial Sector Deepening Trust [FSDT], 2009a, FinAccess national survey 2009: dynamics of Kenya's changing financial landscape, June
Financial Sector Deepening Trust [FSDT], 2009b, Research on mobile payments experience: M-PESA in Kenya, unpublished draft, December
GSM Association, 2009, Wireless Intelligence database, available at www.wirelessintelligence.com
Heyer, A., and I. Mas, 2009, Seeking fertile grounds for mobile money, unpublished paper
Hughes, N., and S. Lonie, 2009, M-PESA: mobile money for the unbanked, Innovations, special edition for the Mobile World Congress 2009, MIT Press
Isaacs, L., 2008, IAMTN presentation, MMTA conference, Johannesburg, May
Jack, B., and T. Suri, 2011, The economics of M-PESA, working paper, http://www.mit.edu/~tavneet/M-PESA-Final.pdf
Juma, V., 2009, Family Bank offers new service linking accounts to M-PESA, Business Daily, 18 December
Kimenyi, M., and N. Ndung'u, 2009, Expanding the financial services frontier: lessons from mobile phone banking in Kenya, Brookings Institution, October
Kinyanjui, K., 2009, Yu launches new mobile cash transfer platform, Business Daily, 16 December
Mas, I., 2008a, M-PESA vs. G-Cash: accounting for their relative success, and key lessons for other countries, CGAP, unpublished, November
Mas, I., 2008b, Realizing the potential of branchless banking: challenges ahead, CGAP Focus Note 50
Mas, I., 2009, The economics of branchless banking, Innovations, Vol. 4, Issue 2, MIT Press, Spring
Mas, I., and O. Morawczynski, 2009, Designing mobile money services: lessons from M-PESA, Innovations, Vol. 4, Issue 2, MIT Press, Spring
Mas, I., and A. Ngweno, 2010, Three keys to M-PESA's success: branding, channel management and pricing, Journal of Payments Strategy and Systems, Vol. 4, No. 4
Mas, I., and S. Rotman, 2008, Going cashless at the point of sale: hits and misses in developed countries, CGAP Focus Note 51
Morawczynski, O., 2008, Surviving in the dual system: how M-PESA is fostering urban-to-rural remittances in a Kenyan slum, HCC8 conference proceedings, Pretoria
Morawczynski, O., 2009, Exploring the usage and impact of transformational m-banking: the case of M-PESA in Kenya, unpublished draft
Morawczynski, O., and G. Miscione, 2008, Examining trust in mobile banking transactions: the case of M-PESA in Kenya, in Avgerou, C., M. Smith, and P. van den Besselaar (eds.), Social dimensions of information and communication technology policy: proceedings of the 8th International Conference on Human Choice and Computers (HCC8), International Federation for Information Processing TC 9, Pretoria, South Africa, September 25-26, Volume 282, 287-298
Morawczynski, O., and M. Pickens, 2009, Poor people using mobile financial services: observations on customer usage and impact from M-PESA, CGAP Brief, August
Okoth, J., 2009, Regulator gives M-PESA a clean bill of health, The Standard, 27 January
Okuttah, M., 2009, Safaricom changes method of recruiting M-PESA agents, Business Daily, 23 December
Ratan, A. L., 2008, Using technology to deliver financial services to low-income households: a preliminary study of Equity Bank and M-PESA customers in Kenya, Microsoft Research Technical Report, June
Safaricom, 2011, M-PESA key performance statistics, available at http://www.safaricom.co.ke/index.php?id=1073 and in various financial reports available at www.safaricom.co.ke



Guidelines for Manuscript Submissions

Guidelines for authors
In order to aid our readership, we have established some guidelines to ensure that published papers meet the highest standards of thought leadership and practicality. The articles should, therefore, meet the following criteria:
1. Does this article make a significant contribution to this field of research?
2. Can the ideas presented in the article be applied to current business models? If not, is there a road map on how to get there?
3. Can your assertions be supported by empirical data?
4. Is my article purely abstract? If so, does it picture a world that can exist in the future?
5. Can your propositions be backed by a source of authority, preferably yours?
6. Would senior executives find this paper interesting?

Subjects of interest
All articles must be relevant and interesting to senior executives of the leading financial services organizations. They should assist in strategy formulations. The topics that are of interest to our readership include:
Impact of e-finance on financial markets & institutions
Marketing & branding
Organizational behavior & structure
Competitive landscape
Operational & strategic issues
Capital acquisition & allocation
Structural readjustment
Innovation & new sources of liquidity
Leadership
Financial regulations
Financial technology

Manuscript guidelines
All manuscript submissions must be in English.
Manuscripts should not be longer than 7,000 words each. The maximum number of A4 pages allowed is 14, including all footnotes, references, charts and tables.
All manuscripts should be submitted by e-mail directly to the editor@capco.com in the PC version of Microsoft Word. They should all use Times New Roman font, and font size 10. Accompanying the Word document should be a PDF version that accurately reproduces all formulas, graphs and illustrations.
Where tables or graphs are used in the manuscript, the respective data should also be provided within a Microsoft Excel spreadsheet format.
The first page must provide the full name(s), title(s), organizational affiliation of the author(s), and contact details of the author(s). Contact details should include address, phone number, fax number, and e-mail address.
Footnotes should be double-spaced and be kept to a minimum. They should be numbered consecutively throughout the text with superscript Arabic numerals.
Please note that formatting in italics or underline will not be reproduced, and that bold is used only in subtitles, tables and graphs.

Reference style examples:
For books: Copeland, T., T. Koller, and J. Murrin, 1994, Valuation: measuring and managing the value of companies, John Wiley & Sons, New York, New York
For contributions to collective works: Ritter, J. R., 1997, Initial public offerings, in Logue, D., and J. Seward, eds., Warren Gorham & Lamont Handbook of Modern Finance, South-Western College Publishing, Ohio
For periodicals: Griffiths, W., and G. Judge, 1992, Testing and estimating location vectors when the error covariance matrix is unknown, Journal of Econometrics 54, 121-138
For monographs: Aggarwal, R., and S. Dahiya, 2006, Demutualization and cross-country merger of exchanges, Journal of Financial Transformation, Vol. 18, 143-150
For unpublished material: Gillan, S., and L. Starks, 1995, Relationship investing and shareholder activism by institutional investors, working paper, University of Texas

Manuscript submissions should be sent to:
Prof. Shahin Shojai, Ph.D.
The Editor
Editor@capco.com
Capco
Broadgate West
9 Appold Street
London EC2A 2AP
Tel: +44 207 426 1500
Fax: +44 207 426 1501

Request for Papers: Deadline October 14th, 2011


The world of finance has undergone tremendous change in recent years. Physical barriers have come down and organizations are finding it harder to maintain competitive advantage within today's truly global marketplace. This paradigm shift has forced managers to identify new ways to manage their operations and finances. The managers of tomorrow will, therefore, need completely different skill sets to succeed. It is in response to this growing need that Capco is pleased to publish the Journal of Financial Transformation, a journal dedicated to the advancement of leading thinking in the field of applied finance. The Journal, which provides a unique linkage between scholarly research and business experience, aims to be the main source of thought leadership in this discipline for senior executives, management consultants, academics, researchers, and students. This objective can only be achieved through relentless pursuit of scholarly integrity and advancement. It is for this reason that we have invited some of the world's most renowned experts from academia and business to join our editorial board. It is their responsibility to ensure that we succeed in establishing a truly independent forum for leading thinking in this new discipline. You can also contribute to the advancement of this field by submitting your thought leadership to the Journal. We hope that you will join us on our journey of discovery and help shape the future of finance.

Prof. Shahin Shojai Editor@capco.com

For more info, see opposite page. © 2010 The Capital Markets Company. VU: Prof. Shahin Shojai, Prins Boudewijnlaan 43, B-2650 Antwerp. All rights reserved. All product names, company names, and registered trademarks in this document remain the property of their respective owners.


Layout, production, and coordination: Cypres (Daniel Brandt, Kris Van de Vijver, and Pieter Vereertbrugghen). Graphic design: Buro Proper (Bob Goor). Photographs: Bart Heynen. © 2011 The Capital Markets Company, N.V. All rights reserved. This journal may not be duplicated in any way without the express written consent of the publisher except in the form of brief excerpts or quotations for review purposes. Making copies of this journal or any portion thereof for any purpose other than your own is a violation of copyright law.

Minimise risk, optimise success

With a Master's degree from Cass Business School, you will gain the knowledge and skills to stand out in the real world.

MSc in Insurance and Risk Management


Cass is one of the world's leading academic centres in the insurance field. What's more, graduates from the MSc in Insurance and Risk Management gain exemption from approximately 70% of the examinations required to achieve the Advanced Diploma of the Chartered Insurance Institute (ACII). For applicants to the Insurance and Risk Management MSc who already hold a CII Advanced Diploma, there is a fast-track January start, giving exemption from the first term of the degree. To find out more about our regular information sessions (the next is on 10 April 2008), visit www.cass.city.ac.uk/masters and click on 'Sessions at Cass' or 'International & UK'. Alternatively, call admissions on:

+44 (0)20 7040 8611

Amsterdam Antwerp Bangalore Chicago Frankfurt Geneva London New York Paris San Francisco Toronto Washington, D.C. Zurich
capco.com
