Enterprise Risk Management Process
Enterprise risk management (ERM) has become increasingly important. The financial world is not immune to systemic failure, as demonstrated by the Barings Bank collapse in 1995, the failure of Long-Term Capital Management in 1998, and a handful of bankruptcy cases in the current financial crisis, e.g., the federal government's takeover of Fannie Mae and Freddie Mac and the fall of Lehman Brothers and Merrill Lynch. There is no doubt that risk management is an important and growing area in an uncertain world. Smiechewicz1 gave a framework for ERM, relying on top management support within the organization. Many organizations now have appointed chief risk officers (CROs), but the effectiveness of risk management depends on active participation of top management to help the organization survive the various risks and crises it encounters. The framework involves the following steps:

• Set risk appetite
• Risk identification process
  – Identify risks
  – Develop risk matrix
• Risk management process
• Risk review processes

The first step is to set the risk appetite for the organization. No organization can avoid risk, nor should it insure against every risk. Organizations exist to take on risks in areas where they have developed the capability to cope with them. However, they cannot cope with every risk, so top management needs to identify the risks they expect to face, and to identify those risks they are willing to assume (and profit from coping with successfully). The risk identification process needs to consider risks of all kinds. Typically, organizations can expect to encounter risks of the following types:

• Strategic risk
• Operations risk
• Legal risk
• Credit risk
• Market risk
D.L. Olson, D. Wu, Enterprise Risk Management Models, DOI 10.1007/978-3-642-11474-8_2, C Springer-Verlag Berlin Heidelberg 2010
Table 2.1 Enterprise risk management framework
• Is there a formal process to identify potential changes in markets, economic conditions, regulations, and demographic change impacts on the business?
• Is new product innovation considered for both short-run and long-run impact?
• Does the firm's product line cover the customer's entire financial services experience?
• Is research and development investment adequate to keep up with competitor product development?
• Are sufficient controls in place to satisfy regulatory audits and their impact on stock price?
• Does the firm train and encourage use of rational decision-making models?
• Is there a master list of vendor relationships, with assurance each provides value?
• Is there adequate segregation of duties?
• Are there adequate cash and marketable securities controls?
• Are financial models documented and tested?
• Is there a documented strategic plan for technology expenditures?
• Are patent requirements audited to avoid competitor abuse as well as litigation?
• Is there an inventory of legal agreements and auditing of compliance?
• Do legal agreements include protection of customer privacy?
• Are there disturbing litigation patterns?
• Is action taken to assure product quality sufficient to avoid class action suits and loss of reputation?
• Are key statistics monitoring credit trends sufficient?
• How are settlement risks managed?
• Is there sufficient collateral to avoid deterioration of value?
• Is the incentive compensation program adequately rewarding loan portfolio profitability rather than volume?
• Is exposure to foreign entities monitored, as well as domestic entity exposure to foreign entities?
• Is there a documented funding plan for outstanding lines?
• Are asset/liability management model assumptions analyzed?
• Is there a contingency funding plan for extreme events?
• Are core deposits analyzed for price and cash flow?
Examples of these risks within Smiechewicz's framework are outlined in Table 2.1. Each manager should be responsible for ongoing risk identification and control within their area of responsibility. Once risks are identified, a risk matrix can be developed. Risk matrices will be explained in the next section. The risk management process is the control aspect of those risks that are identified. The adequacy of this process depends on assigning appropriate responsibilities by role. It can be monitored by a risk-screening committee at a high level within the organization that monitors significant new markets and products. The risk review process includes a systematic internal audit, often outsourced to third-party providers responsible for ensuring that the enterprise risk management structure functions as designed.
Risk Matrices

A risk matrix provides a two-dimensional (or higher) picture of risk, either for firm departments, products, projects, or other items of interest. It is intended to provide a means to better estimate the probability of success or failure, and to identify those activities that would call for greater control. One example might be for product lines, as shown in Table 2.2.
Table 2.2 Product risk matrix

                            Level of risk high   Level of risk medium   Level of risk low
Likelihood of risk low      Hedge                Control internally     Accept
Likelihood of risk medium   Avoid                Hedge                  Control internally
Likelihood of risk high     Avoid                Hedge                  Control internally
The risk matrix is meant to be a tool revealing the distribution of risk across a firm's portfolio of products, projects, or activities, and assigning responsibilities or mitigation activities. In Table 2.2, hedging activities might include paying for insurance or, in the case of investments, using short-sale activities. Internal controls would call for extra managerial effort to quickly identify adverse events and take action (at some cost) to provide greater assurance of acceptable outcomes. Risk matrices can represent continuous scales. For instance, a risk matrix focusing on product innovation was presented by Day.2 Many organizations need to maintain an ongoing portfolio of products. The more experience the firm has in a particular product type, the greater the probability of product success. Similarly, the more experience the firm has in the product's intended market, the greater the probability of product success. By obtaining measures based on expert product manager evaluation of both scales, historical data can be used to calibrate prediction of product success. Scaled measures for product/technology risk could be based on expert product manager evaluations, as demonstrated in Table 2.3 for a proposed product, with higher scores associated with less attractive risk positions. Table 2.4 demonstrates the development of risk assessment of the intended market. Table 2.5 combines these scales, with risk assessment probabilities that should be developed by expert product managers based on historical data to the degree possible. In Table 2.5, the combination of technology risk score 18 with product failure risk score 26 is shown in bold, indicating a risk probability assessment of 0.30.
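A matrix such as Table 2.2 amounts to a lookup from a likelihood/level combination to a suggested response. A minimal sketch (function and variable names are illustrative, not from the source):

```python
# Table 2.2 encoded as a lookup: (likelihood of risk, level of risk) -> response.
RESPONSE = {
    ("low",    "high"):   "Hedge",
    ("low",    "medium"): "Control internally",
    ("low",    "low"):    "Accept",
    ("medium", "high"):   "Avoid",
    ("medium", "medium"): "Hedge",
    ("medium", "low"):    "Control internally",
    ("high",   "high"):   "Avoid",
    ("high",   "medium"): "Hedge",
    ("high",   "low"):    "Control internally",
}

def suggested_response(likelihood: str, level: str) -> str:
    """Return the Table 2.2 response for a likelihood/level combination."""
    return RESPONSE[(likelihood, level)]
```

For example, a low-likelihood, high-level risk maps to "Hedge", while a high-likelihood, low-level risk is controlled internally.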
Table 2.3 Product/technology risk assessment
(ratings: 1 = fully experienced, 3 = significant change, 5 = no experience)

Current development capability              3
Technological competency                    2
Intellectual property protection            4
Manufacturing and service delivery system   1
Required knowledge                          3
Necessary service                           2
Expected quality                            3
Total                                      18
Table 2.4 Product/technology failure risk assessment
(ratings: 1 = same as present, 3 = significant change, 5 = completely different)

Customer behavior                   4
Distribution and sales              3
Competition                         5
Brand promise                       5
Current customer relationships      5
Knowledge of competitor behavior    4
Total                              26
Table 2.5 Innovation product risk matrix expert success probability assessments

                    Failure <10   Failure 10–15   Failure 15–20   Failure 20–25   Failure 25–30
Technology 30–35       0.50           0.40            0.30            0.15            0.01
Technology 25–30       0.65           0.50            0.45            0.30            0.05
Technology 20–25       0.75           0.60            0.55            0.45            0.20
Technology 15–20       0.80           0.70            0.65            0.55            0.30
Technology 10–15       0.90           0.85            0.80            0.65            0.45
Technology <10         0.95           0.90            0.85            0.70            0.60
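The two-step procedure, totaling per-dimension ratings as in Tables 2.3 and 2.4 and then reading the Table 2.5 probability, can be sketched as follows. Names are hypothetical, and a score landing on a shared band endpoint (e.g., exactly 20) is assumed to fall in the higher band, since the published bands overlap at their endpoints:

```python
# Table 2.5 success probabilities.
# Rows: Technology <10, 10-15, 15-20, 20-25, 25-30, 30-35 (top to bottom here).
# Columns: Failure <10, 10-15, 15-20, 20-25, 25-30.
SUCCESS = [
    [0.95, 0.90, 0.85, 0.70, 0.60],  # Technology <10
    [0.90, 0.85, 0.80, 0.65, 0.45],  # Technology 10-15
    [0.80, 0.70, 0.65, 0.55, 0.30],  # Technology 15-20
    [0.75, 0.60, 0.55, 0.45, 0.20],  # Technology 20-25
    [0.65, 0.50, 0.45, 0.30, 0.05],  # Technology 25-30
    [0.50, 0.40, 0.30, 0.15, 0.01],  # Technology 30-35
]

def band(score, bounds):
    """Index of the band containing score: 0 for < bounds[0], 1 for the next band, etc."""
    return sum(score >= b for b in bounds)

def success_probability(tech_score, failure_score):
    """Total ratings are binned into the Table 2.5 bands and the probability read off."""
    row = band(tech_score, (10, 15, 20, 25, 30))
    col = band(failure_score, (10, 15, 20, 25))
    return SUCCESS[row][col]
```

The chapter's example follows: the Table 2.3 ratings sum to 18 and the Table 2.4 ratings to 26, and `success_probability(18, 26)` returns 0.30.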
Risk matrices have been applied in many contexts. In the medical field, Blomeyer et al.3 presented a risk matrix for child development, focused on predicting basic cognitive, motor, and noncognitive abilities based on the two dimensions of organic risk factors and psychosocial risk factors. McIlwain4 cited the application of clinical risk management in the United Kingdom arising from the creation of the National Health Service Litigation Authority in April 1995. This triggered systematic analysis of incident reporting on a frequency/severity grid comparing likelihood and consequence. Traffic light colors are often used to categorize risks into three (or more) categories, quickly identifying combinations of frequency and consequence calling for the greatest attention. Table 2.6 gives a possible risk matrix.
Table 2.6 Risk matrix of medical events

                           Consequence     Consequence   Consequence   Consequence   Consequence
                           insignificant   minor         moderate      major         catastrophic
Likelihood almost certain  Amber           Red           Red           Red           Red
Likelihood likely          Green           Amber         Red           Red           Red
Likelihood possible        Green           Amber         Amber         Amber         Red
Likelihood unlikely        Green           Green         Amber         Amber         Red
Likelihood rare            Green           Green         Green         Amber         Amber
Table 2.6 demonstrates the use of a risk matrix that could be based on historical data, with green assigned to a proportion of cases with serious incident rates below some threshold (say 0.01), red for high proportions (say 0.10 or greater), and amber in between. While risk matrices have proven useful, they can be misused as can any tool. Cox5 provided a critique of some of the many risk matrices in use. Positive examples were shown from the Federal Highway Administration for civil engineering administration (Table 2.7), and from the Federal Aviation Administration applied to airport operation safety. The Federal Aviation Administration risk matrix was quite similar, but used qualitative terms for the likelihood categories (frequent, probable, remote, extremely remote, and extremely improbable) and severity categories (no safety effect, minor, major, hazardous, and catastrophic). Cox identified some characteristics that should be present in risk matrices:

1. Under weak consistency conditions, no red cell should share an edge with a green cell
2. No red cell can occur in the left column or in the bottom row
3. There must be at least three colors
4. Too many colors give spurious resolution
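The color-assignment rule described above, with the text's illustrative thresholds of 0.01 and 0.10 on the serious incident rate, can be sketched as:

```python
# Traffic-light categorization of a historical serious-incident rate.
# Thresholds are the illustrative values from the text, not fixed standards.
def traffic_light(incident_rate: float) -> str:
    if incident_rate < 0.01:
        return "Green"
    if incident_rate >= 0.10:
        return "Red"
    return "Amber"
```

For example, a rate of 0.005 is Green, 0.05 is Amber, and 0.10 or above is Red.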
Table 2.7 Risk matrix for Federal Highway Administration (2006)

                        Very low   Low       Medium    High      Very high
                        impact     impact    impact    impact    impact
Very high probability   Green      Yellow    Red       Red       Red
High probability        Green      Yellow    Red       Red       Red
Medium probability      Green      Green     Yellow    Red       Red
Low probability         Green      Green     Yellow    Red       Red
Very low probability    Green      Green     Green     Yellow    Red

Extracted from Cox (2008).
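Cox's first condition, that no red cell share an edge with a green cell, can be checked mechanically. A minimal sketch applying it to the Table 2.7 coloring (function name hypothetical):

```python
# Table 2.7: rows run from very high probability (top) to very low (bottom);
# columns from very low impact (left) to very high impact (right).
FHWA = [
    ["Green", "Yellow", "Red",    "Red",    "Red"],
    ["Green", "Yellow", "Red",    "Red",    "Red"],
    ["Green", "Green",  "Yellow", "Red",    "Red"],
    ["Green", "Green",  "Yellow", "Red",    "Red"],
    ["Green", "Green",  "Green",  "Yellow", "Red"],
]

def no_red_green_adjacency(grid) -> bool:
    """True if no red cell shares an edge with a green cell (Cox's first condition)."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != "Red":
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == "Green":
                    return False
    return True
```

The Table 2.7 coloring passes this check: every red cell borders only red or yellow cells, with a yellow band separating red from green.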
Cox argued that risk ratings do not necessarily support good resource allocation decisions. This is due to the inherently subjective categorization of uncertain consequences. Thus Cox argued that the theoretical results he presented demonstrate that quantitative and semiquantitative risk matrices (using numbers instead of categories) cannot correctly reproduce risk ratings, especially if frequency and severity are negatively correlated.
Information System Risk Matrix Application
Egerdahl6 presented a risk matrix to support data processing audit functions. The purpose was to identify threats facing the environment, the facility components, and appropriate controls. Steps in building the IT auditing risk matrix included:

1. Identify threats and components
2. Identify necessary controls
3. Place appropriate controls in matrix cells
4. Rank and evaluate control adequacy
Threats were potentially adverse events such as lost or corrupted data, outages of system components, theft, or disasters. Example threats included:

• Alteration – unauthorized changes to the system
• Costs – excessive or inappropriate
• Denial of service – destruction, damage, or other events making the system unavailable to users
• Destruction – outages of system components
• Errors and omissions – system degradation leading to erroneous output
• Fraud – theft of system components, or access to defraud
• Regulatory exposure – system performance leading to government or customer suits
• System malfunction – performance other than intended, from bugs, poor design, or other factors
• Unauthorized disclosure – unauthorized access through bypassing locks or passwords

Auditors were responsible for identifying threats that could occur and that would be harmful to the firm's achievement of goals and objectives. Components included communication circuits, network software, database files, terminals, processing units, and other devices. Examples included:

• Disaster recovery – procedures, components, and information to put the system back in operation, including disaster recovery plans, contingency plans, backup, offsite storage, secondary recovery sites, personnel, and other elements
• Facility – sites, buildings, and rooms housing system components, as well as drawings and specifications, environmental control devices, fire and flood mitigation mechanisms, health and safety codes, and physical security devices
• Hardware – computers, tape drives, disk drives, peripheral equipment, and storage media, to include processing units, minicomputers, workstations, and PCs
• Information – data in the system or components, to include files, applications, databases, transactions, and reports
• Network – communication-related equipment and software, including circuits, modems, multiplexers, controllers, communication facilities, software, and security mechanisms
• Operations – personnel and processes, to include manuals, documentation, and physical and logical access management
• Software – programs to run and maintain the system, to include operating systems and applications software

Controls were procedures or physical items preventing threats from occurring or mitigating event impact.

1. Change and problem management – facilities, hardware, software, and communications networks
2. Cost/resource management – financial data
3. Disaster recovery tasks – documented and tested plan for off-site storage of backup data, alternative site provision, power, hardware, software, air conditioning, etc.
4. Environmental controls – fire, health and safety controls, temperature control, etc.
5. Hardware/software management – vendor support, maintenance plans, standards
6. Inventory controls – equipment and resource accountability
7. Performance goals and objectives – metrics such as resource utilization and lost time
8. Planning and forecasting – proper use of storage, to include planning for growth and upgrades
9. Policies and procedures – directives, codes, regulations, etc.
10. Process monitoring – process control and problem detection
11. Production controls – procedures for backup and recovery
12. Security – devices, techniques, and software
13. Separation of functions – separation of duties to deny potential fraud or theft
14. Training and education – enhance job knowledge and security

The fourth step involved risk ranking both threats and components by each member of the auditing team, developing an ordinal list of threats and components. The most serious threat was placed first on the threat axis and the most important component placed at the top of the component axis. The cells were divided into High (top 25%), Medium (middle 50%), and Low (bottom 25%) categories, and colors applied to aid identification. Controls were then assigned to each cell. As an example, threats could be ranked as follows by the auditing team, with lower numbers indicating more important threats:

1. Outages of system components
2. Unauthorized disclosure
3. Alteration of system
4. Errors and omissions
5. Excessive costs
6. Fraud or theft
7. System malfunction
8. Regulatory or contractual exposure
9. Denial of service
These threats were categorized by placing the first three in the high risk category, items ranked 4 through 6 in the medium risk category, and the last three in the low risk category. Rankings for component importance could be:

1. Information
2. Hardware
3. Software
4. Network
5. Operations
6. Facility
7. Disaster recovery
For each component, controls were assigned by risk category. As a possible example, ranks 1 and 2 might be categorized as critical, 3 and 4 as moderately critical, and 5 through 7 as low in criticality. A risk matrix in line with what has been presented in this chapter could be as shown in Table 2.8. This represents a conventional application of a risk matrix. Egerdahl went further, developing a matrix assigning specific control actions to each combination of threat and criticality, shown in the Appendix.

Table 2.8 IT risk matrix

                        Threat low   Threat medium   Threat high
Criticality important   Amber        Red             Red
Criticality moderate    Green        Amber           Red
Criticality low         Green        Green           Amber
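The ranking-and-coloring steps above can be sketched in code, using the example category splits from the text (threats ranked 1–3 high, 4–6 medium, 7–9 low; components ranked 1–2 critical, 3–4 moderately critical, 5–7 low, mapped here to the Table 2.8 important/moderate/low labels). Names are illustrative:

```python
# Ordinal rankings, most serious/important first (the chapter's example lists).
THREATS = ["outage", "unauthorized disclosure", "alteration", "errors",
           "excessive costs", "fraud", "malfunction", "legal exposure",
           "denial of service"]
COMPONENTS = ["information", "hardware", "software", "network",
              "operations", "facility", "disaster recovery"]

def threat_level(threat: str) -> str:
    rank = THREATS.index(threat) + 1          # 1 = most serious
    return "high" if rank <= 3 else "medium" if rank <= 6 else "low"

def criticality(component: str) -> str:
    rank = COMPONENTS.index(component) + 1    # 1 = most important
    return "important" if rank <= 2 else "moderate" if rank <= 4 else "low"

# Table 2.8: (criticality, threat level) -> color.
COLOR = {
    ("important", "low"): "Amber", ("important", "medium"): "Red",   ("important", "high"): "Red",
    ("moderate",  "low"): "Green", ("moderate",  "medium"): "Amber", ("moderate",  "high"): "Red",
    ("low",       "low"): "Green", ("low",       "medium"): "Green", ("low",       "high"): "Amber",
}

def cell_color(component: str, threat: str) -> str:
    """Color of the matrix cell for a component/threat combination."""
    return COLOR[(criticality(component), threat_level(threat))]
```

For instance, an outage threatening the information component lands in the important/high cell (Red), while a denial of service against disaster recovery falls in the low/low cell (Green).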
Conclusions

The study of risk management has grown in the last decade in response to serious incidents threatening trust in business operations. The field is evolving, but the first step is generally considered to be application of a systematic process, beginning with consideration of the organization's risk appetite. Then the risks facing the organization need to be identified, controls developed, and the risk management process reviewed, along with historical documentation and records for improvement of the process. Risk matrices are a means to consider the risk components of threat severity and probability. They have been used in a number of contexts, basic applications of which were reviewed. Cox provided a useful critique of the use of risk matrices. A more detailed demonstration of risk matrices applied to information system technology, based on the work of Egerdahl, was presented.
Appendix: Controls Numbered as in Text
[Appendix table: each of nine threats (system outage, unauthorized access, system alteration, errors, excessive costs, fraud or theft, system malfunction, legal exposure, denial of service) is crossed with the seven components (information, hardware, software, network, operations, facilities, disaster recovery), with the applicable controls marked by number (1–14, as listed in the text).]
References

1. Smiechewicz, W. 2001. Case study: Implementing enterprise risk management. Bank Accounting & Finance 14(4): 21–27.
2. Day, G.S. 2007. Is it real? Can we win? Is it worth doing? Managing risk and reward in an innovation portfolio. Harvard Business Review 85(12): 110–120.
3. Blomeyer, D., K. Coneus, M. Laucht, and F. Pfeiffer. 2009. Initial risk matrix, home resources, ability development, and children's achievement. Journal of the European Economic Association 7(2–3): 638–648.
4. McIlwain, J.C. 2006. A review: A decade of clinical risk management and risk tools. Clinician in Management 14(4): 189–199.
5. Cox, L.A., Jr. 2008. What's wrong with risk matrices? Risk Analysis 28(2): 497–512.
6. Egerdahl, R.L. 1995. A risk matrix approach to data processing facility audits. Internal Auditor 52(3): 34–40.