Internal audit in financial services: a long time to wait for not very much

“Time is the old justice that examines all such offenders, and let Time try.” (As You Like It, Act IV, Scene 1)
 
 
As Canadian consultant Tim Leech pointed out in an ACCA column in 2009, internal auditors really didn’t have a good financial crisis.  Quite validly, Tim asked the question:

Not being fingered for even a portion of the blame in a catastrophic situation is a good thing for the internal audit profession, isn’t it?

His answer to this rhetorical question strikes at the heart of the utility and effectiveness of internal audit:

. . . the absence of even mild criticism of the internal audit profession is an indictment of the profession’s track record assessing and reporting on the effectiveness of their client’s risk management systems to help prevent catastrophic risk and control governance failures before they occur.

Although it sometimes seems like much longer, it is approaching six years since the global financial crisis started to unfold.  On 2 April 2007, the United States’ second largest mortgage originator, New Century Financial Corp of Irvine, California, filed for relief under Chapter 11 of the United States Bankruptcy Code in Wilmington, Delaware.  The rot had begun to show.

The post mortems began to appear in earnest in early 2009, once the true scale of the impact of the US Treasury’s decision to allow Lehman Bros to fail in September 2008 became apparent.  As Tim Leech pointed out in his column in February of that year, none of those post mortems sought to blame failings by internal auditors.  The first major industry review, the 2008 report of the Institute of International Finance, for example, barely referred to internal audit or the practice of internal auditing.

Similarly, in the UK, Sir David Walker’s 2009 review contained scant reference to internal audit.  His logic for this was hardly flattering:

. . . failures that proved to be critical for many banks related much less to what might be characterised as conventional compliance and audit processes, including internal audit, but to defective information flow, defective analytical tools and inability to bring insightful judgement in the interpretation of information and the impact of market events on the business model.

However, with the passage of time, attention has turned to internal audit.  In mid-2012, the Basel Committee on Banking Supervision (BCBS) of the Bank for International Settlements revised its 2001 document on the role of banks’ internal audit functions and their supervision.  Its statement on the purpose of internal audit in banks is included as Principle 1:

An effective internal audit function provides independent assurance to the board of directors and senior management of the quality and effectiveness of a bank’s internal control, risk management and governance systems and processes, thereby helping the board and senior management protect their organisation and its reputation.

That’s pretty clear.  But how, precisely, will it do that?  The BCBS guidance is largely silent on the output from internal audit activity but does refer to review by both the audit committee and supervisors of internal audit reports.

Revelling in its chartered status, the UK’s Chartered Institute of Internal Auditors (hereafter the UK Institute) has also recently reviewed the role of internal audit in banking, publishing a consultation document in February 2013.  Departing from recent practices, the UK Institute’s review advocates that the director of internal audit report to the firm’s chairman (noting that this may be delegated to the chair of the audit committee).  It is their guidance; they can, after all, recommend what they like.

More notably, the document states that the role of internal audit:

should be to help to protect the assets, reputation and sustainability of the organisation.

Hmmm.  This differs materially from the BCBS expectation around provision of assurance, although it may, of course, encompass the BCBS requirement for assurance also.

Interestingly, the UK Institute’s expectation of the focus of reporting also differs from the BCBS’ view, and includes:

at least annually, an assessment of the overall effectiveness of the governance, and risk and control framework of the organisation, together with an analysis of themes and trends emerging from Internal Audit work and their impact on the organisation’s risk profile.

That is, internal audit should prepare a periodic opinion on the effectiveness of the control framework.  That is not an opinion on control, per se, but on the framework surrounding control.  In addition, the UK Institute’s document advocates including within internal audit’s scope of work, inter alia, “the setting of and adherence to risk appetite” and “the risk and control culture of the organization.”   Nowhere in the document are these terms explained, nor are methods for forming an opinion on them offered.

Not everyone is a fan of periodic control opinions.  Tim Leech, for one, has written and spoken against them repeatedly.  As he noted in the ACCA piece:

The fact that more than one in every eight Sarbanes-Oxley section 404 control effectiveness opinions from management and external auditors in 2006 were later found, as a result of restatements of the financial statements, to be materially wrong should raise serious questions about the ability of auditors today, both internal and external, to form reliable conclusions on control effectiveness.

Usefully (well, not really), the latest revision to the IIA global standards differentiates between an engagement opinion and an overall opinion.  The IIA is clearly leaving the door open for the growth of control opinions, thereby catching up with the reality of the post-Sarbanes-Oxley world.  But an opinion over what?

Tim Leech favours reporting on the effectiveness of risk management systems.  As he said:

I believe without reservation that reporting on the current effectiveness of risk management systems is significantly more valuable than providing subjective opinions on the effectiveness of control.

The crux of Tim’s argument is that

management and auditors currently lack the necessary assessment frameworks, training and tools to provide reliable, repeatable conclusions on control effectiveness.

Yet I cannot see that such frameworks are any more clearly developed in relation to firms’ management of risk.  And certainly not in “the setting of and adherence to risk appetite” and “the risk and control culture of the organization.”  Tim has elsewhere advocated use of the ISO 31000 standard as the basis for risk frameworks, but the reality is that this standard has many detractors, me included, and offers no useful insight on either of these difficult topics.  One such detractor is Bob Kaplan of Balanced Scorecard fame, who argues (see here) that we are not yet ready for standards in risk management and that premature standard-setting carries dangers:

[I]n an environment with limited knowledge and experience, premature standard setting will inhibit innovation, exploration and learning.

The IIA itself notes the problems confronting internal auditors examining risk frameworks:

[I]nternal auditors who seek to extend their role in ERM [should not] underestimate the risk management specialist areas of knowledge (such as risk transfer and risk quantification and modeling techniques) which are outside of the body of knowledge for most internal auditors. Any internal auditor who cannot demonstrate the appropriate skills and knowledge should not undertake work in the area of risk management.

The reality is that internal auditors’ knowledge, and knowledge more generally, in risk and internal control falls well short of the level necessary to produce comprehensive, reliable and replicable opinions on the performance either of firms’ risk management or of their internal control.   A key problem is the assumption of the value of standardization, as Kaplan states.  The rush to claim authority by COSO, by the PCAOB or SEC or by ISO simply inhibits innovation, exploration and learning by firms whose differing contexts and environments may well dictate different solutions to frameworks in risk management or in internal control.  Regulatory mandate should not be confused with authoritative knowledge.

In the area of internal control, for example, arguably the best work is by another Harvard scholar, Robert Simons, whose 1995 levers of control model represented a far broader approach than the subsequent, accounting-driven SEC versions of internal control.  It encapsulates many of the enfants bâtards that are now emerging around behaviour and control.

Instead of adopting dirigiste approaches that close off innovation, standard-setters, regulators and professional bodies should be adhering to the Maoist dictum of “let a hundred flowers blossom.”  Academics should be supporting or even driving that innovation rather than falsely or prematurely asserting authority, as so many have done, especially in relation to risk management.  Research funding agencies in the UK should be supporting such innovation rather than being gulled into believing there are singular answers to complex questions in risk management and internal control.

In the meantime, risk managers and internal auditors (and regulators themselves) are left with a dilemma: how to proceed when there is regulatory pressure to enhance management practice in areas where there is no established or reliable body of knowledge?  ‘Carefully’ and ‘with as much knowledge as possible’ would be my suggestions.  This will require a considerably greater emphasis on investing time and effort to acquire knowledge and insight, as opposed to cataloguing other firms’ practices, than has been in evidence to date.

While, in the UK, the FRC may be on the verge of requiring greater attention to quantitative and integrative risk management practice than previously, the best argument for better knowledge and practice remains one of improving performance.  As recent US research by Booz & Co. shows, underestimating strategic risk is the principal cause of shareholder value destruction.  Addressing firms’ comparative advantages in risk-assumption and risk-bearing is an existential requirement for all corporate firms; they cannot afford to wait for internal risk managers and internal auditors to catch up.  But catch up, in time, they must – or risk losing both their credibility and their professional designations.

Our programme of training in risk management and assurance topics in March and April covers interview skills, enterprise risk management, risk in programmes & projects, culture & risk culture and strategy, risk & uncertainty.  For more information see here.

1.5 billion reasons to improve your interviewing skills


UBS, the giant Swiss banking corporation, has been fined the equivalent of USD 1.5 billion by supervisors in the US, UK and Switzerland in the latest (but far from the last) chapter of the LIBOR rigging scandal, said to involve investigations against more than a dozen major investment banks.  These fines dwarf the earlier fines levied against Barclays concerning its LIBOR submissions.  The details of the violations and the findings of the FSA’s investigation are contained in a 38-page notice issued by the FSA today.  It makes for interesting and somewhat depressing reading.  However, one aspect deserves particular attention.

Under a section titled “the failures of UBS’ systems and controls,” the notice reports that, between January and May 2009, UBS’ Group Internal Audit undertook a limited-scope review of its short-term interest rate (STIR) desks.  Coming after Wall Street Journal articles in April & May 2008 which raised concerns about LIBOR, the review consisted of a walk-through of its procedures and review of exception reporting.  The report from this review “did not consider and contained no reference to UBS’s LIBOR submission process.”

Shortly after the internal audit review, following a “regulatory enquiry in June 2009,” UBS Legal and Compliance function “reviewed the procedures for the LIBOR submission process.”

The notice reports that, between 1 January 2005 and 31 December 2010, Group Internal Audit conducted a total of five reviews of “the STIR business and STIR trading activities.  None of these,” the notice observes, “considered the LIBOR . . . submission process.”

Reporting the story, the Guardian noted:

Five internal audits had failed to uncover the attempts to manipulate LIBOR.

As the Financial Times states:

The misbehaviour spanned three continents and was widely discussed on group emails and in internal chat forums, the FSA said. The compliance department failed to pick it up, despite making five audits of this part of the bank during the period.

However, as the Financial Times article notes:

The FSA [notice] says at least 45 traders, managers and senior managers were involved in, or aware of, the attempts and that investigators found at least 2,000 requests for improper submissions.

These reports are problematic for the assumption of effective operation of the ‘three lines of defence’ – the assumption that internal audit forms a third line of defence against operational risk, non-compliance or malpractice, after the risk control functions (the second line) and business units’ own control and oversight (the first line).  They raise two important questions:

(a) How could a well-resourced and professional group, as UBS’ internal audit function clearly is and was, miss such widespread violations? and

(b) What are the implications of these omissions for other internal audit functions, in financial services and elsewhere?

The rather obvious and pedestrian answer is that, despite the WSJ attention in 2008 and the regulatory interest in 2009, UBS’ Group Internal Audit was not looking for such violations; LIBOR was not ‘on its risk radar’.  And, despite extensive work on the relevant business unit, the short-term interest rate desk, the auditors did not happen across the violations in the course of their routine testing work.

From an audit perspective, this represents serial failures in two key areas: audit planning and audit execution.

Planning first.  The UBS internal audit function was not looking broadly enough at sources of risk to identify the media and subsequent regulatory concerns.  The senior audit personnel planning the audit programme were not asking the right questions of the right people in the right way.  This represents both a failure of coordination between the compliance, risk and audit functions and a failure to monitor and respond to emerging risks.

Second, execution.  In executing the audits on the STIR desk, the auditors were not asking the right people the right questions in the right way.  Had they been doing so, they would probably have identified concerns about manipulation of LIBOR among the 45 people involved (of whom only a few were in STIR itself).  This represents a shortcoming in audit method and approach.

It is relatively safe to assume that UBS’ Group Internal Audit function was and is staffed by professionals who are both technically capable and experienced at what they do.  The shortcomings probably represent less a failure of those people to do properly what they understood to be their jobs than a failure to define properly what their jobs were.  Specifically, most internal auditors (and I say this as a former IIA national chapter president) are determinedly task-focused.  There are strong emphases on technical audit tasks and strong pressures for reliable execution of the audit plan; repeatability is important for evidence and auditing is, or should be, predominantly evidence-based and heavily data-focused.

However, task completion is not the ultimate objective of audit; the objective is the provision of assurance over the effectiveness and efficiency of internal control to the audit committee of the board of directors and to senior executives.  Technical audit tasks are merely a means to an end.

When recruiting for a professional internal audit function, firms place a heavy emphasis on ‘hard skills’ – technical skills in accounting or finance or engineering or IT or project management; the emphasis depends on the control focus of the role.  The behavioural and interpersonal skills required of auditors to do their jobs effectively – often erroneously referred to as ‘soft skills’ – are less emphasized; they are also less actively developed by training programmes for auditors.

The relative emphasis is unsurprising.  The IIA performance standards refer to information required to support audit findings:

Internal auditors must identify sufficient, reliable, relevant, and useful information to achieve the engagement's objectives. Sufficient information is factual, adequate, and convincing so that a prudent, informed person would reach the same conclusions as the auditor.

Information in the form of opinion gathered from interviews is not much prized: the global internal audit standards do not even refer to interviewing or interview skills (though the standards of IIA Australia do, and others may).

The reality is that the most valuable information and insight an auditor can obtain will often come through interviews.  It is from interviews that we can learn how the world really works, from the experience and perspective of the interview subject, rather than as it was designed or originally implemented; we can learn about the vital work-arounds and sources of uncertainty in a process.  Interviews are rich sources of information about reality.

Interviewing is an art that must be learned and it involves taking risks.  As US sociologist Joseph Hermanowicz notes:

Great interviewing is deceptively difficult, partly because it is an acquired ability that takes time to develop, partly because people often remain bound to conventional norms of behaviour while interviewing that precludes open access to the people interviewed.

Perhaps the clearest problem most technically-minded auditors face in interviewing is the nature of the experience they are attempting to access through interview.  Because they are focused on factual evidence, many auditors’ attention in interviews will focus on establishing facts.  Yet facts are not the stuff of great interviewing or of revelation.  At a deeper level of experience is the interview subject’s cognitive response to the facts, as they see them: that is, what they think about the facts.  Deeper still is the interview subject’s affective or emotional response to the facts and their cognition thereof: that is, how they feel about their version of the facts.  Seeking to understand that can unlock a more powerful and a richer dialogue between interviewer and interviewed.  It is through such dialogue that messy truths about process manipulation – LIBOR or otherwise – will emerge.

Thus, it is through engaging with interview subjects affectively that interviewing offers the greatest benefits.  This is where the richest information comes from.  Yet such information takes skill to access, and affective interviewing does not come naturally to left-brain-dominated auditors.

In a recent poll conducted before an interviewing course for a major financial services client with an experienced and competent audit team, fewer than 15% of the team reported that they considered interviewing a real strength; staggeringly, none reported that they got really rich information from interviewing, and only a quarter reported ‘loving’ interviewing.  The ability to establish rapport with an interview subject is vital to gaining rich information; not loving it will reduce the interviewer’s predisposition to take risks – to ask difficult questions well – and will block off the richest (and most efficient) source of insight available to the auditor.

Had the professionals in UBS’ Group Internal Audit function been more adept at interviewing – at engaging affectively with interview subjects – they might have relied more heavily on that technique to inform their engagements.  In doing so, they would have stood a far greater chance of unearthing the problems on the STIR desk and in the LIBOR submission process earlier and dealing with them more effectively, thus avoiding a large proportion of the $1.5 billion levied today in fines.  That is a lot of motivation.

. . . . . . . . . . . . .

Paradigm Risk will run a course on Enhanced Interview Skills in London on Thursday, 9 April 2013, priced at £725 + VAT (pay online £575 + VAT).  The course is focused on risk managers, compliance officers and internal auditors across industries.  To view and book online, click here.

The Enhanced Interview Skills course is part of a series of seminars on changes in risk and risk management (see here).  Other courses include:

Culture & risk culture • 17 April 2013 • £725 + VAT (pay online £575 + VAT)

Strategy, risk & uncertainty • 23 April 2013 • £725 + VAT (pay online £575 + VAT)

Risk in programmes & projects • 30 April 2013 • £725 + VAT (pay online £575 + VAT)

ERM: what's changed? [2 days] • 14 & 15 May 2013 • £1,125 + VAT (pay online £825 + VAT)

Risk appetite in corporate business • 22 May 2013 • £725 + VAT (pay online £575 + VAT)

‘Re-rethinking’ the relationship between risk management and regulatory systems


The current context of regulation

Who’d be a regulator today?  As more and more regulatory initiatives run into trouble, it is harder than ever to get agreement domestically, let alone internationally, on what regulation in any sector should prescribe or proscribe and how it should operate.

In the UK, the findings of the inquiry by Lord Justice Leveson into the culture, practices and ethics of the press in the wake of scandals around phone hacking have resulted in a furious round of negotiations during which the Prime Minister has given major newspaper editors an ultimatum: regulate yourselves or we will regulate you. The latter option, which has been referred to tautologically as ‘statutory regulation’ (all regulation that is not self-imposed – in which case it is not regulation – is based on statutory authority), has evoked howls of outrage about the end of freedom of expression.

In Europe, talks have stalled on creating a single European supervisor for the region’s 6,000 banks; creation of such a supervisor is seen as a key part of a plan to address the eurozone debt crisis; yet it is the German finance minister, Wolfgang Schaeuble, who has expressed reservations most persistently.

In the UK, the appointment of Mark Carney as the next Governor of the Bank of England has led to renewed warnings, this time from a former member of the Bank’s Monetary Policy Committee, US economist Adam Posen, that the restructured position of Governor – overseeing monetary policy, financial stability, and financial regulation and supervision – creates a concentration risk.  The Telegraph reports Posen as saying:

It will take a great deal of wisdom and restraint on Carney's part to not let that lead to over-reach by one individual.

On the world stage, the international regulatory body that Dr Carney chairs, the Financial Stability Board, delivered in June 2012 its assessment of progress against the goals set for the FSB when it was established at the London G20 summit in April 2009.  The conclusion: lots of expressions of ‘progress’; not many real results documented.  Crucially, real progress on new ways of understanding risk propagation in the financial network, transmission and systemic interconnectedness – what the main BIS paper on the topic refers to as:

methodological progress and modelling advancements aimed at improving financial stability monitoring and the identification of systemic risk potential

– has been limited.  The BIS paper concludes:

work should be conducted that incorporates contagion effects in funding markets.

I thought that was one of the key reasons the FSF was re-constituted as the FSB. The screaming need for research on the complex adaptive systems of financial markets was clearly identified in 2009-10 by multiple reports on the crisis, not least in our review of supervisory requirements for systemic risk (here) and the excoriating attack by the Dahlem Group on the failures of conventional general equilibrium macro-economics to model financial stability.

In a commentary published this week in the UK, accountants Grant Thornton report on insurers’ views on the progress towards implementation of the comprehensive European insurance regulation, Solvency II.  Grant Thornton finds:

frustration within the insurance industry over the introduction of Solvency II is at an all-time high . . . although 99% of respondents thought that the principles behind the new regime were good, 82% felt that those principles had been ruined by the complexity of the implementation.

Oh dear.  This matters, first, because we all pay the cost of such initiatives through insurance premiums and, second, because, unlike its banking predecessor, Solvency II is a rationalist and objectivist management framework applied to the sector it regulates.  The Grant Thornton commentary finds that

Almost half of actuaries and risk professionals consider Solvency II to be a box ticking exercise, while 60% of actuaries and 20% of risk professionals consider Solvency II as more red tape from Brussels . . . The implementation process, the constant delays, the complexity of the regime and the quantity of man hours that have been expended preparing for Solvency II, have all resulted in a loss of the market’s hearts and minds.

This represents good regulation being lost in translation to regulatory interpretation, supervisory expectations and firms’ practical requirements.

In the UK again, a joint committee of both houses of Parliament is investigating banking standards, focusing on professional standards and culture and on lessons to be learned about governance, transparency and conflicts of interest; it is to make “recommendations on legislative and other action” by 18 December 2012.  Media coverage to date has made the Committee appear more intent on extracting embarrassing, monosyllabic confessions than on understanding the complexities of the operation of the financial services markets.

In the US, despite several well-researched and highly critical submissions (including ours), COSO has been redrafted with minimal change to the previous draft text issued in December 2011.  This is a lost opportunity of gargantuan proportions.  The regulatory role it now fulfils under §404 of the Sarbanes-Oxley Act affords COSO a global reach across the world’s largest businesses in the vital area of internal control.  Yet, while at its original publication COSO offered new perspectives and genuine insight, in its regulatory application and practical implementation it has been shown to be a seriously flawed construction with unintended consequences that may dwarf any benefit it delivers through improved diligence in financial reporting.

Finally, in the UK, at a conference I chaired in November, the chair of the FRC Codes and Standards Committee, Jim Sutcliffe, stated that the FRC would shortly publish a draft of updated guidance for directors on risk and internal control (i.e. the Turnbull guidance).  He indicated that no major changes were planned.  Again, a significant lost opportunity.

Rethinking risk management all over again (with apologies to Yogi Berra)

In a seminal paper in 1996 titled ‘Rethinking risk management’, US academic René Stulz proposed a goal for corporate risk management:

Primary goal of risk management is to eliminate the probability of costly lower-tail outcomes – those that would cause financial distress or make a company unable to carry out its investment strategy . . . while preserving a company’s ability to exploit any comparative advantage in risk-bearing it may have.

René went further, stating

Once a firm has decided that it has a comparative advantage in certain financial risks [“through its financial instruments and liability structure as well as its normal operations”], it must then determine the role of risk management in exploiting this advantage . . . Risk management may, paradoxically, enable the firm to take more of these risks that it would [otherwise].

In a paper a decade later discussing enterprise risk management or ERM (here), co-written with Brian Nocco, a senior US insurance executive, René repeated the essential message.  In that paper, the authors wrote that

companies should be guided by the principle of comparative advantage in risk-bearing.  A company that has no special ability to forecast market variables has no comparative advantage in bearing the risk associated with those variables. In contrast, the same company should have a comparative advantage in bearing information-intensive, firm-specific business risks because it knows more about these risks than anybody else.

The implications of this – the ‘paradox of risk management’ – were essentially repeated from the earlier paper:

One important benefit of thinking in terms of comparative advantage is to reinforce the message that companies are in business to take strategic and business risks. The recognition that there are no economical ways of transferring risks that are unique to a company’s business operations can serve to underscore the potential value of reducing the firm’s exposure to other, “non-core” risks . . . By reducing non-core exposures, ERM effectively enables companies to take more strategic business risk—and greater advantage of the opportunities in their core business.

Nocco and Stulz differentiated between what they called the macro and micro benefits of ERM.  To gain the micro benefits – improved decisions and aligning businesses’ risk-bearing with the risk interests of the corporate whole – Nocco and Stulz identified two essential disciplines for firms:

  1. The requirement to evaluate the marginal impact of a proposed project in terms of the firm’s portfolio of investments and risks, and
  2. Divisional performance evaluation linked to the cost of capital absorbed by the division to support the risks it is taking – allocating an imputed capital charge at divisional level, reflected in divisional managers’ performance.

The authors conclude that:

With the help of these two mechanisms that are essential to the management of firm-wide risk, a company that implements ERM can transform its culture. Without these means, risk will be accounted for in an ad hoc, subjective way, or ignored.
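
Neither discipline requires elaborate machinery.  As a purely illustrative sketch (the divisions, figures and the simple variance-covariance treatment are my assumptions, not Nocco and Stulz’s prescription), the following Python fragment estimates each division’s marginal contribution to firm-wide earnings volatility (discipline 1) and converts that contribution into an imputed capital charge netted against divisional profit (discipline 2):

```python
import numpy as np

# Hypothetical stand-alone earnings volatilities (GBP m) and correlations between
# three divisions -- illustrative figures only, not drawn from any real firm.
divisions = ["Retail", "Wholesale", "Asset Mgmt"]
vols = np.array([40.0, 90.0, 25.0])          # stand-alone earnings volatility
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
profit = np.array([120.0, 260.0, 70.0])      # divisional operating profit, GBP m

cov = corr * np.outer(vols, vols)            # covariance matrix
firm_vol = np.sqrt(vols @ corr @ vols)       # firm-wide volatility after diversification

# Euler allocation: each division's marginal contribution to firm-wide volatility.
# contribution_i = cov(division_i, firm) / firm_vol; the contributions sum to firm_vol.
marginal = cov.sum(axis=1) / firm_vol

# Translate risk contributions into imputed capital and a capital charge (discipline 2).
CAPITAL_MULTIPLE = 3.0   # capital held per unit of earnings volatility (assumption)
COST_OF_CAPITAL = 0.10   # hurdle rate applied to imputed capital (assumption)
imputed_capital = CAPITAL_MULTIPLE * marginal
capital_charge = COST_OF_CAPITAL * imputed_capital
risk_adjusted_profit = profit - capital_charge

for d, cap, chg, rap in zip(divisions, imputed_capital, capital_charge, risk_adjusted_profit):
    print(f"{d:>11}: imputed capital {cap:6.1f}m, charge {chg:5.1f}m, "
          f"risk-adjusted profit {rap:6.1f}m")
```

Reporting divisional performance net of such a charge is the point of the second discipline: a division that earns its profits by absorbing a disproportionate share of firm-wide risk no longer looks unambiguously good.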

In this neat and defensible way, the authors defined the elements of a firm’s risk system, segueing from corporate systems of allocation and financial accountability to behavioural impact and change – from structure and analysis to behaviour.  The authors go on to note:

But if ERM is conceptually straightforward, its implementation is challenging.

Rather like regulation.

Time for a renewed regulatory focus on human & system behaviour

Many of the problems experienced in regulatory change in the UK and further afield come from losing sight of these simple but powerful insights about the behaviour of firms and people in firms, or of any complex behavioural system.  Culture change cannot and does not arise instrumentally through pulling ‘culture levers’ or driving ‘culture metrics’, nor does culture change because new policies are issued stating that it should.  It cannot be measured meaningfully among groups, only observed and reported, nor can results be compared quantitatively across groups; the psychometric tools on which individuals’ assessments are based are simply not robust to that form of aggregation.  Doing so is like saying that a psychopath and a sociopath can add up to zero; it just doesn’t mean anything.

Regulators must accept, however reluctantly, that culture emerges spontaneously from the history and multi-layered patterns of interactions between unpredictable human beings.  It cannot be prescribed, measured or ‘changed’; behaviours can change and be changed and a different culture – a different set of cultures – will emerge in a firm as a result.  Far too many regulators seem enthusiastic about meddling with culture which they can neither change nor control and which few appear to understand.  Indeed, the quotes from Grant Thornton appear to suggest that any cultural change resulting thus far from Solvency II – a genuinely objectivist regulatory instrument in the original – is not for the better.

This leaves regulators where they have always been: able to prescribe structure and analytic routines (though not the use of information, merely that it be used) and to influence behavioural routines.  Where these coincide, regulatory prescription and proscription remain feasible, but only within the limits of these core elements of structure, analysis (and reporting thereof) and behaviour.  For example, structure and analysis combine to indicate the allocative and decisional routines of the type described by Nocco and Stulz; from there, they observe, behaviour results and culture emerges. All regulatory initiatives must consider these core elements of the system; all too often, behaviour is overlooked and unintended consequences emerge.

Each of these elements exists in a context and faces resulting constraints: structure must follow company law and risk-bearing; analysis requires access to and a means to deliver relevant, accurate, repeatable and comparable data, the competence to manipulate the data and to interpret the results; behaviour requires cognizance of how people actually behave and the discrepancies between espoused and actual behaviours.  And all interact.

Regulation, its interpretation, its application in firms, and the resulting supervision and enforcement together create a complex regulatory system of interpretation, implementation and operation that is an essential part of achieving any regulatory objective.  Regulators who claim that firms simply should have implemented the stated regulatory requirements better have only two feasible options: enforcement of pre-defined sanctions or additional regulatory intervention; ‘light-touch regulation’ and ‘self-regulation’ are and always were oxymorons.  They are merely ‘absence of effective supervision and/or enforcement action’ and ‘self-control’ respectively.  Regulations are made by regulatory agencies or other agencies with powers delegated under statute; but they are only as good as their enforcement.  It’s that simple.  Economic and legal scholars such as Gary Becker and Richard Posner have made this point consistently for decades. It should come as no surprise to politicians or regulators.

Regulatory regimes that kill off the patient through excessive ‘box-ticking’ bureaucracy or through managerial fashion – such as near-universal development of technically-flawed and meaningless risk matrices – have only their conceivers to blame.  Regulators must be aware of the commercial and human (including knowledge) contexts in which their regulation is implemented and adjust either their ambitions or their regulation accordingly.  Name-calling and mea culpas in parliamentary select committees after the fact are no substitute for thinking through regulatory systems in advance of implementation and close monitoring of their impact and outcomes relative to the objectives set for them.

Reconciling risk & regulation

Most importantly, firms responding to regulation must keep their commercial objectives, their risk-generation potential, their risk-bearing capacity and their risk absorption and transmission (i.e. through externalized individual or systemic effects of risk) front of mind; it is the transmission of risk to other parties that is the focus of regulation.  Both at regulators and in regulated firms, far too many regulatory initiatives are delegated to implementation teams whose task is to implement the regulation as written.  This is inevitably a mistake.  Most regulations should not be implemented as written, even in the febrile atmosphere of a financial crisis or a phone-hacking scandal.  They must be translated into the workings and operating routines of firms as they operate and, where necessary, that operation should adapt.  They must never be left to a ‘regulatory process’; this simply guarantees loss of boardroom attention, added cost, irrelevance and eventual transgression.

Whether it be a press regulator, a financial regulator, a corporate reporting regulator, a central bank or the FSB, any regulatory regime exists to manage risks, be they perceived or actual; manifest or imaginable.  A fuller and more honest appreciation of those risks and of the corporate routines for dealing with them is essential both to effective regulation and effective response to regulation.  Regulators and firms alike must consider the interplay of the risk and regulatory systems – the market and behavioural contexts within which risks arise and parties are exposed to the risks, how risks are recognized institutionally and are managed or transferred, how limit breaches and other violations are detected and punished, and the systems through which directors and executives receive assurance over the processes and outcomes.

Certainly, the devil is in the detail, but someone must retain the ‘30,000 ft view’. Hence Nocco and Stulz’s insight about the effect of marginal risk decisions (qua regulation) on the portfolio is essential – regulators must consider each new regulatory step in light of its combined effect with the existing body of regulation, implementation, supervision and enforcement. Regulators must address their risk at the system level.  The greater the energy absorbed in consideration of detail, the less is available for consideration of system operation and impact; attention to the macro and the micro must be balanced.

The history of the last decade has been one of failed regulatory initiatives and/or oversight across a wide range of settings.  Regulation has proceeded with limited reference to regulated firms’ operating and risk systems or to the behaviour of firms and sub-firm ‘actors’; emphasis has been on application and process rather than outcome; sanctions have been almost non-existent. We now observe a crowded regulatory agenda and new-found enthusiasm for intervention as well as growing frustration among the regulated with bureaucratic implementation.  If regulators and firms do not refocus their efforts to the system level to concentrate sensibly and realistically on how regulation should operate in practice alongside and within the commercial risk-bearing systems of regulated firms – regardless of context – the next decade is likely to be no better.

Addendum

We address many of the issues identified in this blog in a series of seminars on risk and risk-taking in January & February 2013.  For details, see here.

The right man for an almost impossible job?


In the Masters’ lectures I deliver each year on governance and risk management, I have in recent years prescribed as mandatory reading the text of a speech by Mark Carney, Governor of the Bank of Canada and now Governor-designate of the Bank of England.  It was a speech he delivered at the Economic Club of Canada in Toronto in December 2010 titled ‘Living with low for long’.  In the speech, Dr Carney focuses

on the factors that have led to a low-interest-rate environment in major advanced economies, and the implications of this environment for financial stability and economic growth.

He begins the speech by pointing out, presciently, that

current turbulence in Europe is a reminder that the crisis is not over, but has merely entered a new phase. In a world awash with debt, repairing the balance sheets of banks, households and countries will take years. As a consequence, the pace, pattern and variability of global economic growth is changing . . .

When I first read the speech, a couple of days after it was delivered in December 2010, I was struck by the clarity of both its expression and its vision.  He was as comfortable referring to recent monetary aggregates as he was with the sweep of history from the 1930s to today.  It was the speech of an assured as well as an intellectually curious man.

What struck me most, and still strikes me as I re-read the speech, was Carney’s analysis of the danger of economic hubris.  The implications of sustained periods of low interest rates can be material:

As we have all just been reminded at great cost, an extended period of stability breeds complacency among financial market participants as risk-taking adapts to the perceived new equilibrium. Indeed, risk appears to be at its greatest when measures of it are at their lowest.

He went on to explain the effects of this, as follows:

Low variability of inflation and output (reduces current financial value at risk and) encourages greater risk-taking (on a forward value-at-risk basis). Investors stretch from liquid to less-liquid markets and large asset-liability mismatches are stretched across credit and currency markets. These dynamics helped compress spreads and boost asset prices in the run-up to the crisis. They also made financial institutions increasingly vulnerable to a sudden reduction in both market and funding liquidity.

This is an extremely neat analysis of the problems of the pre-crisis period and their subsequent manifestation.  It also signals the tremendous difficulties in managing financial stability and systemic risk effectively, topics on which we have written extensively (see here).  Dr Carney sees his role as chairman of the Basel-based Financial Stability Board, a post he took over from Mario Draghi just over a year ago, as fundamental to reforming the global financial system to improve economic efficiency.  In the vernacular of my homeland, ‘too right, mate!’

The other aspect of Carney’s 2010 speech that was unusual and noteworthy was his appeal to learn the lessons of history:

the question remains whether there will still be cases where, in order to best achieve long-run price stability, monetary policy should play a supporting role by taking pre-emptive actions against building financial imbalances. As part of our research for the renewal of the inflation-control agreement, the Bank is examining this issue. While the bar for further changes remains high, the Bank has the responsibility to draw the appropriate lessons from the experience of others who, in an environment of price stability, reaped financial disaster.

Despite Canada having weathered the storm of the financial crisis better than any other OECD economy, Dr Carney’s speech contained not so much as a hint of triumphalism.  After noting the benefits of low public debt levels in Canada, Dr Carney ended the speech by sounding a note of caution:

Cheap money is not a long-term growth strategy. Monetary policy will continue to be set to achieve the inflation target. Our institutions should not be lulled into a false sense of security by current low rates. Households need to be prudent in their borrowing, recognising that over the life of a mortgage, interest rates will often be much higher . . . We must improve our competitiveness. Recovery after a recession demands that capital and labour be reallocated . . . Now is not the time for complacency.

No doubt he will bring this same cautious and realistic vision to the problems facing the Bank of England in its management of the monetary settings of the British economy.  He will also need to bring his extensive managerial and banking experience to bear on the Bank of England’s expanded role as regulator and supervisor of financial institutions.

His chairmanship of the FSB is an added fillip to the chances of coordinated reform across leading financial markets in the area of systemic risk and will boost the profile of that function at the Bank of England.  And it needs boosting.  Hopefully, his attention within the Bank will encourage a more ambitious response than has been evident to date in this area; the Bank is more than capable of providing it but has, so far, lacked the support of its leadership to offer intellectual direction globally on this issue.  Such direction is sorely needed.

The role of Governor of the Bank of England is a massive one.  Although hindsight has shown that not every decision he has made has been the right one, Sir Mervyn King has performed the role with grace under considerable pressure.  But the role Dr Carney will take on will be greatly expanded from Sir Mervyn’s current job, and some consider it too large a role for a single office (notably Anthony Hilton in his Evening Standard column on 18 November 2012, not available online); Hilton described the BoE Governor role as

expanded out of all recognition in recent months and (sic) carries with it an almost impossible weight of public expectation.

Few could manage this expanded role and the scrutiny and criticism that will come with it.  Perhaps Dr Carney is one.  I certainly hope so, for all our sakes, and wish him luck when he takes up the role next June.

An agenda for improving corporate risk management


In the course of preparing a series of seminars we will be delivering in London this winter, we have focused on what an agenda or ‘manifesto’ for improving corporate risk performance would look like.  What should the firm do practically to improve its management of risk and uncertainty? The agenda has five items.

1. Better focus & insight 

Focus in risk management needs to start at the strategic level rather than where it usually starts at present: in the operational bowels of a firm. At the strategic level, understanding risk means understanding the potential effects of assumptions about an uncertain market and competitive environment on the viability of the firm’s business model.

The focus of risk management should be to improve analysis of the potential impacts of uncertainty on the business model – to address known risks – and to bring attention to risks of which the firm is not presently aware: to improve anticipation of emerging trends and risks, improve detection and increase the firm’s resilience against these risks.

Understanding the parameters of the firm’s risk-taking and risk-holding capacity is vital, and those parameters should routinely be compared to the firm’s changing risk position over time. Before considering any qualitative tolerances or compliance issues, the firm should understand its risk capacity and risk tolerance in quantitative, financial terms.  The board’s relative preferences for how close it should operate to those tolerances, and the price it will pay to reduce risk – through avoiding risk, developing operating flexibility or transferring risk contractually – represent its risk appetite.
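
As a minimal, hedged illustration of that comparison (all figures are invented), expressing capacity, tolerance and the current position in the same financial units makes the monitoring almost trivial:

```python
# Illustrative only: capacity, tolerance and the current position expressed in the
# same financial units (GBP m of potential one-year loss at a chosen confidence level).
risk_capacity = 500.0     # loss the firm could absorb before distress (e.g. capital buffer)
risk_tolerance = 300.0    # loss the board is prepared to contemplate (set inside capacity)
current_position = 340.0  # modelled potential loss on the current book or plan

headroom_to_tolerance = risk_tolerance - current_position
headroom_to_capacity = risk_capacity - current_position

if headroom_to_tolerance < 0:
    print(f"Outside tolerance by {-headroom_to_tolerance:.0f}m: reduce, hedge or re-plan.")
elif headroom_to_tolerance < 0.1 * risk_tolerance:
    print(f"Within tolerance, but only {headroom_to_tolerance:.0f}m of headroom remains.")
else:
    print(f"Within tolerance; headroom {headroom_to_tolerance:.0f}m "
          f"({headroom_to_capacity:.0f}m to capacity).")
```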

All risk is financial (except as it relates to threats to physical safety, which also has a financial impact).  Risk cannot and should not be separated (à la COSO) into operational, financial and compliance issues; this just confuses things.

2. A greater emphasis on effectiveness

The starting point should be to examine the accuracy and reliability of the firm’s historic planning and project forecasting relative to what has actually occurred: how accurate were the firm’s business and financial plans and project plans?  Focusing on the error parameters in forecasting will tell the firm a lot about how much credence to place in the next forecast. How reliably a firm can understand and describe its expected future over relevant planning horizons and how well it prepares for and accommodates the unexpected defines the performance of its risk management system.
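
A simple sketch of what that examination might look like follows; the figures are hypothetical, and the error measures (bias, spread, MAPE) are standard statistics rather than anything prescribed by a framework:

```python
import statistics

# Hypothetical history: planned versus actual annual operating cash flow, GBP m.
planned = [210, 225, 240, 255, 270]
actual  = [198, 230, 212, 250, 241]

errors = [a - p for p, a in zip(planned, actual)]        # signed forecast errors
pct_errors = [e / p for p, e in zip(planned, errors)]    # relative errors

bias = statistics.mean(pct_errors)                       # systematic over/under-forecasting
spread = statistics.stdev(pct_errors)                    # variability of forecast error
mape = statistics.mean(abs(e) for e in pct_errors)       # mean absolute percentage error

print(f"Bias: {bias:+.1%}   Spread (1 s.d.): {spread:.1%}   MAPE: {mape:.1%}")

# A crude credibility band for next year's plan, using the observed bias and spread.
next_plan = 285.0
central = next_plan * (1 + bias)
low, high = central * (1 - spread), central * (1 + spread)
print(f"Plan of {next_plan:.0f}m -> history-adjusted range of roughly {low:.0f}m to {high:.0f}m")
```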

Firms should drop the use of pointless risk scoring.  If a risk is sufficiently unimportant that a score of 1 to 5 will suffice, it is not worth scoring.  Such scores provide no meaningful information and inhibit reflection about cause and effect – the most useful risk thinking of all in a firm.  Firms should eliminate risk matrices on the same basis (they are fundamentally flawed representations of risk in any case) and rename risk registers for what they are: ‘known risk control registers’.  They have their place; it is not at board tables.

3. Organisational reach

In order to understand risk at the firm level, the firm must adopt an ‘enterprise’ view.  This implies the ability to integrate the analysis of risks through an understanding of inter-dependencies and correlations.  Of course, as the financial crisis demonstrated, such correlations are unstable.  Either way, this requires developing an integrated view of risk by risk type and across the firm, including the dependencies and transmission effects between risks.

For risk management to mature, firms must get considerably more ambitious about setting limits for risk analytically across the firm, based on probabilistic measures of risk such as cashflow-at-risk and earnings-at-risk.  Wherever possible, these should be built into executives’ accountabilities and performance assessments.
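
What an analytic, probabilistic limit might look like is sketched below.  The distributional assumptions and numbers are mine and purely illustrative; a real model would use the firm’s own cash-flow drivers and their dependencies:

```python
import random

random.seed(42)
N = 50_000  # Monte Carlo trials

# Illustrative annual cash-flow model: simple distributions for a handful of drivers.
def simulate_cashflow():
    revenue = random.gauss(1_000.0, 120.0)                       # GBP m
    cost_ratio = min(max(random.gauss(0.78, 0.05), 0.60), 0.95)  # operating costs / revenue
    one_off = random.expovariate(1 / 15.0) if random.random() < 0.10 else 0.0
    return revenue * (1 - cost_ratio) - one_off

outcomes = sorted(simulate_cashflow() for _ in range(N))

expected = sum(outcomes) / N
p05 = outcomes[int(0.05 * N)]        # 5th-percentile outcome
cashflow_at_risk = expected - p05    # shortfall versus expectation at 95% confidence

print(f"Expected cash flow {expected:.0f}m; 5th percentile {p05:.0f}m; CFaR(95%) {cashflow_at_risk:.0f}m")

# A firm-level limit expressed in the same units can then be monitored period by period.
CFAR_LIMIT = 90.0
print("Within limit" if cashflow_at_risk <= CFAR_LIMIT else "Limit breached: escalate")
```

The point is not the particular model but the unit of expression: a limit stated in cash terms can be delegated, monitored and enforced in a way a coloured matrix cannot.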

4. Behavioural realism

We need a far greater realism about the behavioural role of the board of directors.  Executives drive behaviour in a business; after all, they are in the business.  The role of directors is to ensure that executives recognize the potential impact of their actions and behaviours on the people working for them.

Firms must re-evaluate their corporate policies in light of revealed behaviours – they must assess objectively and understand the differences between expressed behaviours, modelled behaviours and revealed tolerances in terms of actual management practice in the firm.  They should re-evaluate sanction regimes as they are actually applied rather than as they are espoused in policy.  Nothing is more corrosive than appealing to policies that are routinely and visibly violated without sanction.

5. Improved operability

Many firms need a greatly expanded focus on the data and risk analysis necessary to support decision-making.  Firms should develop internally or procure externally the data necessary to support understanding of the parameters of risk and uncertainty.  This includes data on what has gone wrong both within and outside the firm.

Analysis of risk can and must be linked to the firm’s forecasting and planning systems.  That will provide the basis for building a limits system that works and that is applied consistently and robustly across the firm.  Linking tolerances to scenarios, stress testing and variance in performance versus plan provides a robust way of holding executives accountable for their management of risk; nothing less can sustainably be effective.  Whether in a financial institution or a non-financial corporate firm, limits, scenarios and stress tests should be linked to capital allocation and to tolerances around risk to capital.  The firm should charge business units for the use of at-risk capital as an essential performance discipline and as an indicator of executive performance.
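
As an illustrative sketch only (the scenarios, impacts and tolerance are invented), linking named stress scenarios to the plan and to a stated downside tolerance yields a simple, repeatable accountability report of the kind described above:

```python
# Illustrative figures only: planned earnings, a board tolerance for downside deviation
# from plan, and named stress scenarios with modelled earnings impacts (GBP m).
planned_earnings = 400.0
downside_tolerance = 120.0  # maximum acceptable shortfall versus plan

scenarios = {
    "Recession / demand shock": -150.0,
    "Key supplier failure": -60.0,
    "FX: 15% adverse move": -45.0,
    "Cyber outage, two weeks": -80.0,
}

print(f"Plan {planned_earnings:.0f}m; downside tolerance {downside_tolerance:.0f}m")
for name, impact in sorted(scenarios.items(), key=lambda kv: kv[1]):
    breach = -impact > downside_tolerance
    flag = "BREACH: mitigation or plan change required" if breach else "within tolerance"
    print(f"{name:<26} stressed earnings {planned_earnings + impact:5.0f}m   {flag}")
```

Charging business units for the at-risk capital implied by the scenarios they drive is then a short step, along the lines of the capital-charge sketch given earlier under Nocco and Stulz’s second discipline.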

Summary

Risk management is not an exercise to be conducted occasionally to provide assurance; it is a vital and ongoing activity central to the health and sustained performance of a firm.  It should not be reduced to workshops (very seldom useful) and a tick-box effort.  Firms corporately and executives individually should determine which decisions require risk-based analysis – typically quantitative, probably stochastic; if a decision does not warrant such analysis, it is a low-level management control.

Many firms spend far too little time understanding the linkage between strategy, uncertainty and risk. Most firm failures result from strategic errors or flawed strategic assumptions.  To be effective, risk management must address that problem.

In risk terms, many firms are ‘data deserts’; quantitative analysis of business risks is regarded as ‘too hard’ or not practicable.  Until firms move beyond this aversion and concentrate on the role of risk in corporate structure and accountabilities, on systematic analysis of risk exposures and dependencies, on developing resilience against the range of plausible risk scenarios (or making a conscious decision not to) and on better understanding the linkage between observed behaviour and risk, risk management will remain a peripheral exercise; it will remain a tick-box distraction.  We cannot afford to be so cavalier with other people’s money.

To learn more about the series of seminars in London between December 2012 and February 2013, visit

www.paradigmrisk.com/forthcoming-courses

The truth about Neil Armstrong, Barclays, LIBOR, risk & culture

[Image: Trinity test fireball, 16 milliseconds after detonation]

To go directly to the commentary paper, 'Regulation, risk & culture: will we never learn?', click here.

The report of the parliamentary Treasury Committee on LIBOR appeared a week after the death of the American astronaut Neil Armstrong.  The two events are strangely linked by their relevance to culture – but separated by a yawning gap in the quality of thinking brought to bear on the topic.

When Armstrong stepped on to the surface of the moon, he described it as “one giant leap for mankind.”  But the flag he planted was American.  Just after Neil Armstrong’s death in late August, President Obama said he “was among the greatest of American heroes – not just of his time, but of all time.”  Armstrong’s was a remarkable accomplishment, but it was not Armstrong’s alone; it belonged also to Dr Edwin “Buzz” Aldrin and command module pilot Michael Collins, and to a generation of NASA astronauts, ground crew, engineers, physicists, technicians and other personnel. It was not one man or two men or three; it took many men and women, many years and vast resources to make Neil Armstrong one of “the greatest American heroes”.  Those resources were applied not only for the good of science but also as a crucial step in the Cold War – in a space race that had emerged from the arms race that followed the Trinity test (pictured) and the subsequent use of the atomic bomb by the Americans against two Japanese cities at the end of WWII. Armstrong’s accomplishment occurred within a definite context.

Since the success of Apollo 11, the American space programme has had its share of failures and disasters.  In the quarter of a century since the first space shuttle disaster, its performance and failures have been analysed in detail from a range of disciplinary perspectives.  One that has emerged strongly is the role of culture.   Context is a critical element of risk-taking and culture.

The report of the Treasury Select Committee uses the word ‘culture’ a staggering 50 times; it was used 96 times in the record of Oral Evidence.  There was plenty of confusion in the way it was used by those giving evidence and in the Committee’s report.  Culture has appeared frequently in previous analyses of failure in different contexts, but few have been examined in the detail that has attended the losses of the space shuttles Challenger in 1986 and Columbia in 2003 (Armstrong was a member of the Rogers Commission examining the loss of Challenger).  When it comes to financial services regulation, can we and will we learn the lessons of history?

Based on the evidence from the recent parliamentary Treasury Committee report on the LIBOR scandal and from other industry bodies’ and regulators’ reports internationally, the answer would appear to be a resounding ‘No’.  There is little evidence that regulators, industry bodies or market participants have understood the nature of the regulatory challenge or how to respond to the calls for improving culture and risk culture.  Without a change of focus, considerably greater application of behavioural insight and a strong measure of analytic caution, behavioural regulation of financial services will not move forward in the way that Parliament, regulators, the media, executives, shareholders or consumers expect or intend.

This pessimistic conclusion argues for the need to address risk, behavioural control and culture from first principles in order to reach a regulatory ‘settlement’ that will improve compliance and the management of risk rather than simply adding cost and confusion.  If we are to avoid regulating in haste and repenting at leisure, there is much that financial regulators can and must learn from other sectors’ failures about the importance of organisations’ history and context, of their technical, control and administrative cultures, of compliance and of the limits of rule-making.

Financial services firms face a choice:

  • wait for the regulator and/or supervisor to act, however blunderingly, to mandate new forms of regulatory 'cultural imperialism', and live with the consequences;
  • adopt whatever superficial but fashionable solutions emerge from the consulting sector; or
  • invest in greater understanding of the utility and limits of cultural descriptions of organisational activity and the potential prescriptions that emerge from them.

Whether recharting the behavioural course of banks, responding to the rigours of Solvency II in insurers or refocusing asset managers in a lower-yield world, we believe the last of these is an urgent imperative facing all firms in the financial sector.

We have recently published our Autumn riskbriefing, which examines the relationship between behaviour and culture in the context of the social, market and internal pressures facing an organisation.  Seeking lessons to learn, we examine first the accomplishments of the US space programme, its subsequent failures and some of the conclusions from the analysis of those failures.  We place the US space programme in its context, which has changed materially since its inception.  Then we review the recent parliamentary report on manipulation of LIBOR and the mis-characterisation of culture and risk culture by the Treasury Committee, regulators, supervisors, industry bodies and market participants.  So far, the portents are ominous.  Lessons have not been learned; they do not even appear to have been sought. Realism and humility are both in short supply.

You can access the riskbriefing paper Regulation, risk & culture: will we never learn? here.

You can also access a two-page summary of the paper here.

Banks 20 years behind in risk systems? Regulator, heal thyself.

[Image: Royal-McBee LGP-30 computer]

In a recent (29 July) article in the FT titled “Banks 20 years behind in risk management”, the author cited a survey by Corven, a consultancy, that indicated that “the largest banks and insurers are at least two decades behind their peers in the aviation industry in managing risk.” The article continued:

“Respondents described 62 per cent of “major risk incidents” as attributable to culture, leadership or behaviour but 91 per cent reported that the response to such incidents had been to change processes and systems. Meanwhile, 93 per cent of financial institutions have no way of measuring culture or behaviour, according to the survey.”

The article itself, as much as the research, is revealing.

First, the title.  In a press release the following day discussing the research, Corven actually states:

“Emerging themes suggest that the largest banks and insurers are many years behind their peers in the oil and gas and aviation industry in measuring and changing behaviours in response to major operational risk.” (emphasis added)

The sub-editors at the FT clearly believed “twenty years” and “two decades” would grab the reader more effectively than the rather anodyne “many years”, and, of course, they would be right.  Also, wisely, the reference to the oil & gas industry was omitted from the article.

It is particularly telling that a survey of only “25 senior risk officers” should attract the attention of the FT at all.  The number is very small and, from it, few reliable inferences can be drawn.  The statement in the FT article that “four percent claimed they were proactive” could be rewritten “one respondent claimed his or her institution was proactive.”

Turning to the findings, “62 per cent of ‘major risk incidents’ attributable to culture, leadership or behaviour” means people, pure and simple (and especially so under the BIS operational risk categorisations).   That 91 per cent of respondents changed “processes or systems” is unsurprising; the alternative would be to change people.

However, the truly staggering bit comes next.  To repeat: “meanwhile, 93 per cent of financial institutions have no way of measuring culture or behaviour, according to the survey.”  The truly interesting piece here is that the other 7 per cent (although how many organisations is 7 per cent of 25? 1.75, by my calculation) believe they have a way of 'measuring culture or behaviour'.  Given that there is no meaningful way of measuring risk culture (and never will be) and that risk behaviour seldom reduces to objective metrics, it is the understanding of the 7 per cent that must be called into question.
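To make the arithmetic concrete, here is a minimal sketch (my own illustration, not Corven's or the FT's workings) of what percentages drawn from a 25-respondent survey actually represent, together with a crude confidence interval; the numbers speak for themselves.

```python
# A minimal sketch of the arithmetic behind a 25-respondent survey; my own
# illustration, not Corven's or the FT's workings. Each quoted percentage
# maps to a handful of people, and the sampling error around it is wide.
import math

N = 25  # respondents, as reported

def headcount(pct: float, n: int = N) -> float:
    """How many respondents a quoted percentage actually represents."""
    return pct / 100 * n

def approx_95ci(pct: float, n: int = N) -> tuple[float, float]:
    """Crude normal-approximation 95% confidence interval, in percentage points."""
    p = pct / 100
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return max(0.0, (p - half_width) * 100), min(100.0, (p + half_width) * 100)

for quoted in (4, 7, 62, 91, 93):
    low, high = approx_95ci(quoted)
    print(f"{quoted}% of {N} = {headcount(quoted):.2f} respondents; "
          f"approx 95% CI {low:.0f}% to {high:.0f}%")
```

On this rough basis, the headline figures sit inside intervals tens of percentage points wide, which is rather the point.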

The expectation that complex, emergent human phenomena are measurable and that, if they were, the metrics produced would be objective and operable is a comforting delusion but a delusion nonetheless.  But it seems to sell newspapers (and placate regulators).

Of course, the real story is that most banks really are well behind the curve in terms of both operating and risk and control systems.  Our research on systemic risk published in 2010 (available here) highlighted this problem clearly.  In some areas, ‘20 years behind present functionality’ may not be an exaggeration.  To the institutions themselves must be added central bankers and regulators (payments processors and exchanges are considerably more advanced).

There is no doubt that banks' and insurers’ systems investment programmes have lagged the exigencies of their business activity resulting in excessively complex systems architectures, abundant reliance on ‘legacy systems’ and contorted interfaces between systems that are as unreliable as they are unnecessary.  The root cause of these problems, however, has not been lack of expenditure; it has been spending on the wrong things.  The problem is endemic.  The material question is why?

The answer is as unpopular as it is controversial.  At a CSFI event at which I outlined our research and our findings on systemic risk, I was rounded upon by a well-known City figure (among others) for daring to suggest that the answer was right in front of us (or would be if we were standing on the south side of North Colonnade in Canary Wharf, just to the east of the DLR track).

Of course, it is not merely the Financial Services Authority that is to blame; in many ways, they have simply been a conduit.  The problem lies in the fiction that on-going (dare I say ‘perpetual’?) regulatory change, driven by zealous legislators and regulators in Brussels and Whitehall (Brussels, mostly), can be effective.  A similar, though uncoordinated, flow has been evident in Washington and in other financial capitals.

The perpetual torrent of regulation has been a reaction to the on-going string of financial crises and institutional failures that have bedeviled the industry (and always will).  The belief has been that these failures have been the result of insufficient regulation.  The implication is that more regulation is better and will fix the problem(s).   This profound assumption has proceeded virtually unchallenged except by industry bodies and the institutions themselves, who are (rightly) presumed to be self-interested.

The effect of the perpetual torrent of regulation has been to deny financial institutions the opportunity to plan over a reasonable horizon their systems strategies and to execute those strategies uninterrupted.  Instead, institutions have had constantly to adjust and revise their systems development and replacement plans to accommodate the changes necessary to permit compliance.  This has been especially true in the all-important area of risk management.

This problem has been compounded by constant merger activity and the notorious challenges of integrating systems post-merger.  Ironically, it is often mis-handled systems rationalisation that ultimately nullifies the presumed benefits of a planned merger.

For those firms operating across borders, there has been the additional problem of multiple flavours of each regulatory initiative.  While, in some instances, this offers these firms the potential for regulatory arbitrage, it complicates the systems picture and results in patches in each country to meet the specifics of that jurisdiction’s compliance requirements.  For internationally operating firms (i.e. big ones), these divergent regulatory approaches limit the benefits of a unified systems approach and substantially complicate core system replacement.

Another pervasive problem is excessive prescription of control activities in countries’ regulatory approaches.  Rather than focusing on what firms must achieve, regulations also frequently prescribe how.  The degree of detail prescribed by regulators around the world has two mutually-reinforcing effects: (i) it makes system design and configuration a potentially limiting factor for compliance and (ii) it crowds out firms’ own initiative to develop and implement efficient regulatory responses.  Both of these militate against intelligent systems design as a predictable source of competitive advantage.  This, in turn, reduces firms' appetite for and investment in systems replacement, reinforcing the need to divert resources to maintain the compliance currency of legacy systems.  The result is predictable.

However, at industry level, the biggest problem, all but ignored by regulators, is inconsistent data standards and unreliable data provenance in firms’ securities and customer data.  The absence of consistent global standards for securities and customer data bedevils all attempts to improve data quality and, as a result, the quality and reliability of firms’ reporting to regulators.  It makes forming a consistent and reliable view of systemic risk functionally impossible.  Regulatory fiat, à la Solvency II, will not solve the problem (although it may force the industry to put in place work-arounds, thus creating more complexity).
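By way of illustration only (the identifiers, names and amounts below are invented, not drawn from any real firm), consider the same position recorded in a trading system under one identifier scheme and in a risk system under another, with the counterparty name spelled differently in each; a naive aggregation cannot tell that they are one and the same exposure.

```python
# A hypothetical, deliberately simplified sketch of the data-standards problem;
# all identifiers, names and amounts are invented for illustration.
trading_system = [
    {"security_id": "XS00EXAMPLE01", "scheme": "ISIN",  "counterparty": "ACME Holdings Plc", "exposure": 12_500_000},
]
risk_system = [
    {"security_id": "00EXAMPLE9",    "scheme": "CUSIP", "counterparty": "Acme Holdings",     "exposure": 12_500_000},
]

def naive_total_exposure(*systems):
    """Aggregate exposure keyed on the identifiers exactly as each system records them."""
    totals = {}
    for system in systems:
        for row in system:
            key = (row["security_id"], row["counterparty"])
            totals[key] = totals.get(key, 0) + row["exposure"]
    return totals

# The single underlying position appears under two different keys, so any
# firm-wide (or regulator-facing) view built this way is fragmented or
# double-counted, depending on how the report is cut.
print(naive_total_exposure(trading_system, risk_system))
```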

At the regulatory level, national and global regulators have been delinquent in developing and defining a standardised reporting structure for risk exposures across classes.  The absence of coherent industry risk data architectures holds back both regulatory efficacy and firms’ investment in their core information technologies.

Driven by uncertainty over regulatory direction and initiatives, firms across the industry have systematically directed IT spend towards short-term, compliance-driven upgrades to existing systems rather than longer-term strategic systems replacement and enhancement.  Firms have then had to stomach criticism from regulators for the very systems management practices that this perpetual torrent of regulatory change has fostered.

Recent IT failures at institutions in the UK and further afield are, beyond doubt, the responsibility of the firms themselves.  But European and national regulators need to be far more conscious both of their impact on firms’ systems investment decisions and of the utility of defining the data architecture they need to perform effective supervision at firm and system level.

Excessive regulatory prescription is strangling firms’ initiative and turning them into compliance factories.   The result weakens rather than strengthens risk management and encourages firms to retain and patch dated systems.  Perhaps a regulatory rethink is in order.

A voyage in oxymoron: a case study in ERM system selection


ERM is a broad church. Currently, it means different things to different people, depending on experience and discipline.  How far can the term be pushed before it loses meaning? In a recent chat thread, a US central government agency’s head of risk appealed for “an ERM system evaluation checklist” to be used “to compare features and functionality”.  This is certainly not the first time I have been asked such a question (although, in this instance, I was asked along with 3,992 other members of the LinkedIn group).

There are plenty of these database-driven systems on the market; all do roughly the same job. The principal differentiators are (i) look and feel, (ii) ease of integration of quantitative analysis (do you want to be taken seriously?) and (iii) flexibility of reporting.

Selecting one of these systems under explicit instruction is one thing; under those circumstances it is a compliance solution. Doing it as a conscious and deliberate choice to further enterprise risk management is quite another.  No-one should be under any illusions: a compliance solution is precisely what you will achieve, despite the best will in the world.  Simply put, there is no such thing as an 'ERM system' in the technology sense. There are plenty of software vendors who, for perfectly valid commercial reasons, take it upon themselves to badge their risk/compliance databases as such.

And there are plenty of well-intentioned if unsuspecting buyers.  But to implement an effective ERM system, start somewhere else. Compliance is fine but it is not ERM and never will be.

In the case of the US agency, the risk director reported that an extensive study had been undertaken and that the organisation had developed a five-year roadmap for risk, no doubt full of aspirational statements.  The agency had probably paid external advisors to assist with this and, as a result, there will be considerable expectation among its senior executives that risk will be better managed and control will be more effective as a result of the implementation and of following the roadmap. If only they can select the right tool for the job!

Yet, here was the risk director of the agency, in a LinkedIn chat thread, trying to elicit the best way to select what they believe to be the most important enabler of the roadmap they have defined.  It strikes me that he was probably experiencing the first, dawning moments of realisation that the artifice the agency has assembled can only lead, ultimately, to disappointment. Like so many others before them, they will have experienced a few "but can it work, really?" moments. Sadly, the answer, provided by innumerable examples of the experience of others, is: "No, it cannot and it will not." Or, more accurately, yes, it will work right up to the moment that it doesn't. And when it doesn't, it will be spectacular.

Simply put, attempting a simplistic solution to problems of irreducible uncertainty and complexity is ultimately ineffective and thus (almost) pointless. No risk database, no matter how well specified and developed, can provide the perspectives, mind-sets, information architecture, capabilities and competencies required in an organisation to address its risk challenges. Risk databases are the right answer to the wrong question.

If the objective is to implement a tool to collate all the risks that an organisation already knows about, then a risk database is the right place to start.  Of course, this raises the question of why one would ever want to do such a thing.  However, if the objective is to address risk and uncertainty and their implications for the structure and metastructure of control in the organisation, and to develop approaches and techniques that assist executives to manage the risks within their areas of authority, the starting point is very different.  It will not be what people already know that will enhance anticipation of and resilience to risk; it will be what they do not know and how they deal with that lack of knowledge, with uncertainty, or with the lack of understanding of complex and ambiguous operating conditions and the results thereof.

If an approach based around populating a risk database is chosen, senior people will push back – as the risk director reports that they have in the US agency case – because they see another compliance activity heading their way that will do nothing to help them with their rather confusing day jobs. They see a whole load of workshops designed specifically to elicit from them what they already know – using their watch to tell them the time. Donald Rumsfeld got it right: there are known unknowns and unknown unknowns. Of course, his department's approach to that problem at that time (2002) was scarcely exemplary. Quite the contrary.

The difficulty is that the risk database approach and its attendant risk elicitation workshops do not address the important types of uncertainty or, if they do, they do so tangentially and partially. What we need is an approach that marches headlong into the bewildering and considerably more useful world of addressing the nature and implications of behaviours, uncertainties and complexities in the organisation’s strategic, operating and control environments. No easy task.

Innumerable organisations are confronting this problem. There also appears to be a growing recognition in government that risk is not being addressed satisfactorily in government agencies by the current orthodoxy. Endless revelations by witnesses appearing before US House and Senate sub-committees and UK parliamentary select committees, relating to government agencies’ (and, increasingly, private firms’) crises and failures, suggest we have not achieved the insight into risk that proponents of (what I will call here) the ‘orthodox approach’ to management of risk – workshops, registers, matrices – claim we ought to have realised.  We need thorough – even forensic – analysis of whether approaches based on compiling lists of known risks are an effective way to manage risk; frankly, sustained utility seems improbable.

Instead, we require an approach – or, more realistically, a set of approaches – that recognise the ‘non-linear’ nature of risk.  Much of the time, firms, agencies and other organisations work in a zone of relative stability punctuated by periods or cycles of greater or lesser volatility.  In these periods, the risks that manifest are known about and, to a greater or lesser extent, understood.  Such risks are, or can be, adequately captured using the orthodox approach referred to above.

However, these systems are never ‘stable’ per se.  Occasionally, the unanticipated results of the unpredictable behaviours and interactions of human beings in the system, or a random external shock, or the revelation of previously unknown conditions in the operating environment (what Taleb calls a ‘black swan’), can shift such systems into a phase of instability that behaves very differently; a previously unknown (and probably unknowable) ‘tipping point’ has been passed.  What happens thereafter is turbulent and chaotic.  The trajectory is not predictable, but examination of previous crises reveals discernible patterns.  In this phase, organisational resilience is crucial; interventions can be decisive in managing the crisis or can escalate it immeasurably.  The transmission and amplification effects of ubiquitous social and news media mean that corporate intent and external effect can differ diametrically.  ‘Control’ in the traditional sense is meaningless.
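As a toy illustration only (my own construction, with invented parameters, not a model advanced in this post), consider a simulated system in which a latent stress variable accumulates quietly until it crosses an unobserved threshold, after which day-to-day behaviour changes qualitatively; a register of the risks known during the quiet phase would record nothing of what follows.

```python
# A toy sketch of a 'tipping point'; the mechanics and parameters are invented
# purely for illustration.
import random

random.seed(1)

TIPPING_POINT = 5.0            # unobserved threshold in the latent stress variable
stress, crossed_on = 0.0, None
daily_moves = []

for day in range(500):
    stress = max(0.0, stress + random.gauss(0.05, 0.3))   # slow, noisy accumulation of stress
    if crossed_on is None and stress >= TIPPING_POINT:
        crossed_on = day                                    # the tipping point is passed, unnoticed
    volatility = 0.5 if crossed_on is None else 4.0         # markedly different behaviour afterwards
    daily_moves.append(random.gauss(0.0, volatility))

if crossed_on is None:
    print("no tipping point in this simulated run")
else:
    before, after = daily_moves[:crossed_on], daily_moves[crossed_on:]
    print(f"tipping point crossed on day {crossed_on}")
    print(f"mean absolute daily move before: {sum(map(abs, before)) / len(before):.2f}")
    print(f"mean absolute daily move after:  {sum(map(abs, after)) / len(after):.2f}")
```

The particular equations do not matter; the point is that nothing in the 'stable' data foretells the regime that follows the threshold, which is precisely where the orthodox register-and-matrix approach runs out of road.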

Any of us can relate this description to a recent failure with which, for whatever reason, we are familiar; in a sense it is a generalised ‘pathology of a crisis’.  Until we can (i) understand the different crisis pathologies, (ii) imagine approaches to the management of risk that can provide a measure of anticipation of, and resilience and responsiveness to, rapidly escalating crises and (iii) provide interventions that address the crisis pathology as it really happens, risk management will not meet the (perhaps inflated) expectations of management – and of post-event parliamentary oversight functions.

The starting point is to understand what really happens – by looking at what has already really happened – in practice.  We need robust and meaningful review of real risk incidents (of which there are plenty) and of the structure and performance of the risk management systems in situ in the host organisations at the time (reviews of which there are almost none outside major hazard and loss-of-life events).  Sweeping such crises under the nearest carpet – and thereby failing to understand the pathology of the crisis – misses an essential learning opportunity each time it happens.  The resulting review need not be too uncomfortable for the host; on the contrary, it provides an ideal opportunity for some organisational honesty and denouement in a low-threat environment; in one sense, the greater the involvement of the host, the better – as long as it does not extend to defensive veto of post-event analysis.

Such an approach would augment enormously our understanding of the ERM systems that matter: not periodic review of databases of known risks, but the messy reality of the operation of organisations’ routines for managing uncertainty, building resilience and identifying and responding to a potentially chaotic operating environment – internally and externally.  From such an exercise, far more realistic prescriptions for risk management routines would emerge.

Inability to imagine an alternative is not a good enough reason to stick with an ineffective status quo.  There are meaningful alternatives which must be given the opportunity to show their worth.  The current situation, where they are crowded out by a convenient but simplistic risk management orthodoxy, serves no-one.