For good measure: A methodology for emergency management performance measurement

By Bethany Moore, MPA, CEM

Introduction

In today’s era of accountability, it has never been more important for emergency managers to demonstrate their progress towards achieving program goals and desired outcomes. Performance measurement systems are designed to do just that – to allow an entity to establish a baseline (where are we now?), to track and measure progress (how far have we come?) and to provide evidence of outcomes achieved (what have we accomplished?). Establishing that the work we do is making a difference is important both from an organizational perspective and for the field of emergency management overall. From an organizational perspective, the ability to provide evidence of achievements could mean the difference between securing or losing program funding. Performance measurement also benefits the field of emergency management by providing evidence to support further investment in initiatives that have achieved measurable results for organizations and communities. The field of emergency management is several years away from standardized, industry-wide performance measures, but many organizations are facing pressure to establish performance measures in the short term. In this article, I describe a methodology for designing a performance measurement system in emergency management.

My research and experience in the area of performance measurement come from the health care emergency management perspective. The health care perspective is useful in this context because there are many lessons that emergency managers can draw from the health care field in relation to performance measurement. In health care, performance measurement is highly structured and the standards for data and reporting are some of the highest of any field of practice. As an entity operating within the Canadian health care system, Alberta Health Services (AHS) tracks and measures its performance in cooperation with two external entities that set out specific goals, outcomes and standards for measurement and data: the Canadian Institute for Health Information and the Health Quality Council of Alberta (Alberta Health Services, 2017). Unfortunately, the kind of nationally recognized, evidence-based and outcome-focused performance measures that have become the norm for health care organizations do not yet exist for emergency management. So, when we set out to design a robust performance measurement framework for emergency management within AHS, the challenge was steep. Our performance reporting would sit next to that of clinical areas where tracking and measurement of performance is designed to meet the highest levels of scrutiny, leaving us to wonder: how would we measure up?

Tying performance to improved outcomes

One of the key challenges to performance measurement in the field of emergency management is a lack of performance data that directly ties our performance to improved outcomes in actual emergency and disaster events. In health care, randomized controlled trials and cohort studies allow us to objectively assess whether operational changes improve or worsen patient outcomes. The nature of disaster events makes it challenging to do the same in emergency management when assessing the impact of emergency preparedness initiatives.

Our emergency preparedness efforts are realized only in the aftermath of infrequent emergency and disaster events that unfold in complex, unpredictable ways. The outcome of any given disaster results from a combination of factors that include both performance-related elements, such as preparedness initiatives and response effectiveness, and non-performance-related elements, such as the timing and severity of the event itself. Even the most robust post-incident analysis is rarely able to directly link outcomes, such as the number of deaths resulting from a particular disaster, to specific performance-related factors. No two disasters are the same, and even when an organization or jurisdiction faces similar events multiple times, or similar events occur across multiple jurisdictions, assessments of how well an organization performed can vary widely depending on the perspective. Collecting valuable performance-related evidence during and after a disaster is possible, but it requires robust performance measurement processes that can establish direct links between your performance and observed outcomes.

Forward-thinking emergency managers today have an opportunity to be at the forefront of performance measurement in emergency management by designing, implementing and refining performance measurement systems that will help to build a foundation for an industry-wide, evidence-based and outcome-focused performance measures framework. To do this, I propose a five-step methodology for practitioners to establish an emergency management performance measurement system for their organization.

Step 1: Establish a Framework

A framework is essential as the basis for how you will assess performance. The purpose of a framework is to set out the broader organizational goals that will form the basis for the performance measures that you will put into place.

Performance measurement frameworks can take many forms, and are often built around a pre-established, organization-specific vision and mission. For example, a community jurisdiction may establish a broad vision to be the greenest city in Canada. Since achieving this vision requires the participation of all city departments and services, organizational direction may require the emergency management program to focus its performance measurement on this vision and any associated strategies set out by the organization. This forms the framework for development of program-specific performance measures.

For emergency management programs that have more freedom in establishing a framework for performance measurement, there are numerous approaches that can be, and have been, successfully adapted to emergency management. However, a review of the literature in this area reveals that there is no single approach that is widely accepted across the industry. As performance measurement in emergency management matures, tried, tested and commonly accepted frameworks will emerge. For now, though, most organizations will need to tailor existing approaches to meet organizational needs.

Here, I focus on only two common approaches to performance measurement that can be adapted to any emergency management program: an industry standards approach, and a Balanced Scorecard (BSC) approach.

Industry Standards Approach

In some organizations, particularly those that operate in heavily regulated environments, compliance with industry standards is viewed as paramount to performance. This approach has also been endorsed in the literature as an effective framework, with the added benefit of allowing benchmarking across organizations (Office of the Auditor General of British Columbia, 2010). If your organizational culture places a high value on industry standards, this approach may be the best fit to gain buy-in for your performance measures.

The challenge with using industry standards as a performance measurement framework is that standards are designed to assess a current state, a snapshot in time, rather than provide a basis for measuring progress. Further, most industry standards focus on structure (such as establishing an emergency management program or dedicated position), process (such as conducting a planning meeting) and outputs (such as completing a response plan) rather than the outcomes these activities are seeking to achieve. Assessing structure, process and output in terms of whether or not the organization meets the criteria set out in standards typically leaves little room for assessment of quality and effectiveness.

The Canadian Standards Association’s Z1600 standard for Emergency Management and Business Continuity Programs is one of the most widely used standards in emergency management in Canada and has been adapted for performance measurement purposes in Alberta Health Services. While the standard offers a holistic view of emergency management that is applicable across organizations of all types and sizes, its generalizability also presents challenges in using it to assess performance in a specific organization or jurisdiction. Without establishing an organization-specific interpretation, it can be difficult to reach consensus on what exactly full compliance with each clause of the standard entails. Further, many elements of the standard offer little in the way of guidance for organizations to assess the actual impact of their efforts. As a result, if you plan to use an industry standard as the framework for performance measurement, be aware that standards cannot simply be taken at face value. For organizations willing to put in the work of interpreting and adapting them, industry standards can be a useful basis for performance measurement in emergency management.

Balanced Scorecard (BSC) Approach

In contrast to an industry standards approach, the BSC approach provides maximum flexibility to an organization in establishing a framework for performance measurement. The BSC is one of the most commonly used and widely researched frameworks for performance measurement in both the public and private sectors (Mackay, 2004; Micheli & Kennerley, 2005). A traditional BSC, developed for the private sector, sets out four perspectives of performance: financial, customer, internal business process, and learning and innovation. Each of the perspectives is linked to a common strategic goal. The perspectives used in a BSC are often adapted to organization-specific priorities and may include fewer or more than four perspectives. In the emergency management context, these may take the form of quadrants representing the four phases of emergency management, or organization-specific program objectives (see Figure 1). A BSC approach also allows organizations to easily integrate new and innovative perspectives into their performance measurement, such as reducing inequity in how the organization prepares for and responds to disaster.

Figure 1: Example adaptation of a Balanced Scorecard to emergency management

(Moore, 2016)


In a BSC, all of the perspectives are linked together with causal relationships that set out what must be achieved in one perspective to enable achievement in another perspective. A BSC provides a framework for establishing multiple, linked perspectives of performance, but does not dictate the specific measures of performance. As a result, organizations using a BSC have a great deal of flexibility to not only decide where they are focusing the spotlight in looking at performance (the perspectives), but also what specific measures they will use to assess that performance.
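
To make the structure concrete, the sketch below models a simple BSC in Python. The four perspectives follow the phases of emergency management, and each carries illustrative measures and a causal link to the perspective it enables; the perspective names, measures and strategic goal are assumptions for illustration only, not prescribed content.

from dataclasses import dataclass, field

# A minimal sketch of a Balanced Scorecard adapted to emergency management.
# All names, measures and links are hypothetical.
@dataclass
class Perspective:
    name: str
    measures: list[str] = field(default_factory=list)
    enables: str = ""  # the perspective this one supports ("" = end of chain)

strategic_goal = "A disaster-resilient organization"

scorecard = [
    Perspective("Mitigation", ["% of identified hazards with mitigation plans"], "Preparedness"),
    Perspective("Preparedness", ["% of staff trained", "exercises completed per year"], "Response"),
    Perspective("Response", ["time to activate the coordination centre"], "Recovery"),
    Perspective("Recovery", ["% of services restored within target time"]),
]

print(f"Strategic goal: {strategic_goal}")
for p in scorecard:
    link = f"-> enables {p.enables}" if p.enables else "(end of causal chain)"
    print(f"  {p.name}: {p.measures} {link}")

The causal chain is what distinguishes a BSC from a flat list of measures: each perspective’s measures exist to enable achievement in the next.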

Step 2: Develop Performance Measures

Once you have established a framework for performance measurement, the next step is to determine how you will measure performance within that framework. Performance measures are often a combination of two things: indicators, which quantify a specific element of performance, such as the number of staff trained, the percentage of plans completed, or the score on a stakeholder evaluation survey; and metrics, which combine multiple indicators that, taken together, indicate progress towards a particular objective.

Whether you choose an industry standards approach or a BSC approach, you will need to develop indicators for each clause of the standard, or each BSC perspective, that set out measurable aspects of performance. For example, a BSC perspective focusing on stakeholder engagement may include a handful of metrics such as internal stakeholder engagement, external stakeholder engagement, and public engagement. Each of these metrics should have one or more indicators associated with it. For external stakeholder engagement, these may include indicators looking at the frequency of stakeholder meetings, stakeholder participation in emergency exercises, and the results of an annual stakeholder engagement survey.
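
To illustrate how indicators roll up into a metric, here is a minimal sketch in Python. The indicator names, values, targets and the choice of equal weighting are all illustrative assumptions, not prescribed measures.

# Each indicator is normalized against its target (capped at 100%),
# then averaged into a single metric score.
indicators = {
    "stakeholder meetings held": {"value": 10, "target": 12},
    "exercise participation rate": {"value": 0.75, "target": 0.80},
    "engagement survey score (out of 5)": {"value": 4.1, "target": 5.0},
}

def metric_score(indicators: dict) -> float:
    scores = [min(i["value"] / i["target"], 1.0) for i in indicators.values()]
    return sum(scores) / len(scores)

print(f"External stakeholder engagement: {metric_score(indicators):.0%}")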

Using proxy measures to measure inequity

Because of the difficulties in directly linking emergency management performance to disaster outcomes, we often rely on proxy measures as the next best thing. A proxy measure is “an indirect measure of the desired outcome which is itself strongly correlated to that outcome” (GovEx, 2019). In emergency management, proxy measures are necessary to capture performance in achieving some of the most important emergency management outcomes, such as reducing the amount of suffering or destruction that results from an event. In these cases where direct measurement is not possible, we instead establish indicators that have strong evidence of being linked to actual desired outcomes.

Reducing inequity in disaster is one example of an outcome that may require proxy measures to assess from a performance perspective. Knowing that the most vulnerable individuals are disproportionately impacted by disaster, a jurisdiction may establish a performance goal to reduce inequity in disaster for its population. To measure this, they would need to look first at structure, process and outputs, and then at outcome, or proxy outcome, measures. Examples of indicators may be: a structural indicator assessing evidence that a working group to tackle the issue has been established; process indicators counting frequency of meetings or stakeholder and vulnerable population attendance at those meetings; and an output indicator for the development and approval of a jurisdictional plan for how inequity will be addressed in disaster.
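
As a simple sketch of how these indicator types might be recorded, the Python snippet below tags each hypothetical indicator as structure, process or output so that reporting can distinguish them from true outcome measures; the indicators and values are illustrative assumptions.

# Hypothetical indicators for an inequity-reduction goal, tagged by type.
inequity_indicators = [
    {"type": "structure", "name": "working group established", "value": True},
    {"type": "process", "name": "working group meetings held this year", "value": 4},
    {"type": "process", "name": "vulnerable-population representatives attending", "value": 6},
    {"type": "output", "name": "inequity-in-disaster plan approved", "value": False},
]

for ind in inequity_indicators:
    print(f"[{ind['type']}] {ind['name']}: {ind['value']}")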

Measuring what is most important

True outcome measures in this example would require data collection across multiple disasters in which impacts are widespread across both vulnerable and non-vulnerable segments of the population. Even then, it would be difficult to establish that improvements in equity were a direct result of inequity-reduction initiatives. Since true outcome measures are not possible, the jurisdiction would need to establish proxy measures for the goal of reducing inequity. For example, knowing that individuals of lower socioeconomic status have lower levels of personal preparedness (Ablah, Konda & Kelley, 2009), the jurisdiction could establish a proxy measure based on a survey of household preparedness, comparing low-income and high-income areas. The jurisdiction could also collect valuable performance data during disaster showing trends in how both vulnerable and non-vulnerable populations access services.
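
The sketch below works through this proxy measure with hypothetical survey numbers: the gap in household preparedness rates between low-income and high-income areas stands in as the indirect indicator of inequity.

# Share of surveyed households reporting a 72-hour emergency kit, by area.
# All figures are hypothetical.
survey = {
    "low-income areas": {"prepared": 84, "surveyed": 400},
    "high-income areas": {"prepared": 168, "surveyed": 400},
}

rates = {area: r["prepared"] / r["surveyed"] for area, r in survey.items()}
gap = rates["high-income areas"] - rates["low-income areas"]

for area, rate in rates.items():
    print(f"{area}: {rate:.0%} of households prepared")
print(f"Preparedness gap (proxy for inequity): {gap:.0%}")

Tracked over successive surveys, a narrowing gap would suggest that inequity-reduction initiatives are having an effect.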

Quality over quantity is important when it comes to developing performance measures. An outcome-based approach means measuring aspects of performance that are more resource-intensive in terms of data collection, so it is important to focus your efforts on the indicators that provide the most valuable information. An effective performance measurement system does not measure everything; it measures what is most important. Figuring out what is most important to your organization, based on the strategic goals for the program, is an essential step in identifying the performance measures you will use.

Step 3: Collect Data

All performance measurement requires the collection of data. In some cases, this may be data you are already collecting that can be analyzed for performance measurement purposes. In other cases, new data collection and analysis processes may need to be established. For example, performance in emergency exercises is a common measure that organizations seek to use. Exercises, after all, are the closest thing to a real event that we can replicate in a scheduled, structured way. However, the subjective nature of most post-exercise reports makes them difficult to use in performance reporting if you want to establish quantifiable trends over time. To do so, you would need to implement a consistent evaluation process across every exercise to ensure you are collecting the same information in the same way.
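
A consistent evaluation process might look like the following sketch, which assumes every exercise is scored against the same fixed criteria on the same 1-5 scale; the criteria, exercise names and scores are hypothetical.

from statistics import mean

# The same evaluation form is applied to every exercise, making
# overall scores comparable over time.
CRITERIA = ["notification", "activation", "communication", "documentation"]

exercises = {
    "2023-Q2 tabletop": {"notification": 3, "activation": 2, "communication": 3, "documentation": 4},
    "2023-Q4 functional": {"notification": 4, "activation": 3, "communication": 3, "documentation": 4},
    "2024-Q2 full-scale": {"notification": 4, "activation": 4, "communication": 4, "documentation": 5},
}

for name, scores in exercises.items():
    print(f"{name}: overall {mean(scores[c] for c in CRITERIA):.2f} / 5")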

Some of the most valuable data that you can collect will require complex data collection processes. For example, community jurisdictions often establish the goal of a more resilient population. Assessing the population impacts of program performance significantly increases the complexity of the performance measurement design. It means that you must collect data not only internally within your program, but also from external stakeholders and others whose behavior you are seeking to change. While this is more resource-intensive than a program-based approach, the benefit is that you are gathering performance data that is meaningful not just within your program or organization, but to the wider community or group that you serve.

Assessing progress towards actual population outcomes also requires routinely collecting data from the public in the form of surveys or other resource-intensive initiatives. Data collection involving the public often also requires a privacy impact assessment and other ethical considerations. In these cases, it is important to consider the feasibility of data collection at the outset, when you are establishing performance measures. Consider cost and other resource constraints that may limit your ability to collect complex data. Meaningful, outcome-focused data can be gathered as part of your routine work, but it requires some effort at the outset to implement the operational practices needed to ensure you are collecting accurate data that will provide meaningful performance information.

Step 4: Establish Reporting

With a framework in place and performance measures and data collection procedures established, you must then determine how best to report performance. In cases where your performance measurement framework has been pre-established by the organization, reporting may also be required to follow a particular format. In other cases, you will need to determine how best to present the data so that it achieves the goals set out for the performance measurement. An industry standard approach is often suited to reporting by compliance percentages, while a BSC approach provides more flexibility in how you report on performance.

Consider the audience for the performance reporting – do they want performance data that can easily be summed up in a short presentation with highly visual graphs? Or do they want to be able to explore the performance data in detail to help inform strategic planning? Understanding the needs of your audience will help to guide the type and frequency of reporting.

Regardless of how you decide to report on performance, it is important to be fully transparent about the basis for the performance information you are reporting. In many cases, particularly with an industry standards approach, you may need to establish weighting across the performance measures. If you have established 100 performance indicators based on an industry standard, reporting only the compliance percentage does not tell the whole story. The clauses in industry standards range in impact from relatively insignificant to essential components of organizational performance. Knowing that not all 100 indicators contribute equally to achieving desired outcomes, you will either need to apply weighting to ensure that your reporting provides an accurate representation of performance, or choose a handful of key performance indicators from the standard and provide in-depth reporting on those.
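
The difference weighting makes can be shown with a small sketch. Below, five hypothetical indicators derived from a standard are scored two ways: as a raw compliance percentage, where every clause counts equally, and as a weighted score; the clause names and weights are illustrative assumptions.

# Each indicator: (clause, weight, compliant?). Weights are assumed.
indicators = [
    ("hazard identification and risk assessment", 5, True),
    ("emergency response plan approved", 5, True),
    ("annual full-scale exercise conducted", 4, False),
    ("plan distribution list maintained", 1, True),
    ("committee meeting minutes filed", 1, True),
]

raw = sum(compliant for _, _, compliant in indicators) / len(indicators)
weighted = (sum(w for _, w, compliant in indicators if compliant)
            / sum(w for _, w, _ in indicators))

print(f"Raw compliance: {raw:.0%}")            # 80% - every clause counts equally
print(f"Weighted compliance: {weighted:.0%}")  # 75% - the missed exercise weighs more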

Step 5: Monitor and Maintain

Performance measurement is not a ‘set it and forget it’ system. Organizational priorities often change, which may require you to adjust to a new framework or shift direction towards new strategic goals. Despite these shifts, much of the work involved in identifying outcomes and developing indicators can be easily adapted to a new framework or approach as needed.

Indicators should be reviewed and refined frequently, particularly when organizational processes change. Issues with data collection may also arise, requiring you to rethink your approach or adjust indicators. Finally, any weighting that you have applied for reporting purposes should also be continually re-assessed to ensure that performance reporting is accurate.

Conclusion

Performance measurement is one of many tools that emergency managers can use to demonstrate value, to aid in strategic decision making and to meet stakeholder expectations. Performance measures alone do not necessarily improve performance, but as Rozner (2013) points out, “they do provide ‘signposts’ that signal progress toward goals and objectives as well as opportunities for improvement.” The value that you get out of performance measurement will largely depend on the work that you put in finding meaningful measures and collecting robust data.

The kind of performance measurement that we see in the health care field did not happen overnight – it is the product of decades of implementation, use and refinement by hundreds of organizations. It’s built into the everyday practice of front-line health care providers, and supported by organizational commitment to collecting defensible data. The field of emergency management has a long way to go towards industry-wide performance measures, but there are numerous approaches that you can take today to build useful and defensible performance measures for your emergency management program. Further, efforts today will create a body of practice that will one day help to form the building blocks of benchmark standards across the industry.


Author Bio:

Bethany Moore is a health care emergency manager with a clinical background in Emergency Medical Services (EMS). She is a Senior Consultant with Alberta Health Services where she leads the design and implementation of organizational performance measures for the department of Emergency/Disaster Management. Bethany holds a Master of Public Administration degree from the University of Victoria, where her research focused on performance measurement in health care emergency management, and is a Certified Emergency Manager (CEM). She is also an instructor with the Northern Alberta Institute of Technology (NAIT) Disaster and Emergency Management Program. She can be reached at Bethany.Moore@ahs.ca

References

Ablah, E.A., Konda, K.M., & Kelley, C.L. (2009). Factors predicting individual emergency preparedness: A multi-state analysis of 2006 BRFSS data. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, 7(3), 317-330.

Alberta Health Services. (2017). Performance Measures. Available at: http://www.albertahealthservices.ca/performance.asp

GovEx (2019). Proxy Measure. Johns Hopkins University Center for Government Excellence. Available at: https://govex.jhu.edu/wiki/proxy-measure/

Mackay, A. (2004). A Practitioner’s Guide to the Balanced Scorecard. The Chartered Institute of Management Accountants Research Foundation. Available at: http://www.cimaglobal.com/Documents/Thought_leadership_docs/tech_resrep_a_practitioners_guide_to_the_balanced_scorecard_2005.pdf

Micheli, P., & Kennerley, M. (2005). Performance measurement frameworks in public and non-profit sectors. Production Planning & Control, 16(2), 125–134. http://doi.org/10.1080/09537280512331333039

Moore, B. (2016). The preparedness continuum: Key performance indicators for the Alberta Health Services Emergency/Disaster Management Program. Available at: https://dspace.library.uvic.ca/handle/1828/7357

Office of the Auditor General of British Columbia. (2010). Guide for developing relevant key performance indicators for public sector reporting. Victoria, B.C.: Province of British Columbia. Available at: http://www.bcauditor.com/sites/default/files/publications/2010/report_10/report/OAGBC_KPI_2010_updated.pdf

Rozner, S. (2013). Developing Key Performance Indicators: A Toolkit for Health Sector Managers. United States Agency for International Development. Available at: https://www.researchgate.net/file.PostFileLoader.html?id=5523685aef9713475e8b458f&assetKey=AS%3A273751344648192%401442278814306