Monitoring and Evaluation in climate change adaptation
Monitoring and evaluation (M&E) is a key stage in the adaptation process and an important means of demonstrating effectiveness and accountability. There are particular challenges associated with M&E of adaptation, related to the long timescales of climate change and its impacts.
At a glance
Monitoring and evaluation (M&E) is critical in ensuring the long-term success of climate adaptation initiatives, plans and actions. It plays an important role in three aspects of adaptation.
- Tracking the performance of activities undertaken during the development of an adaptation plan (e.g. stakeholder engagement activities).
- Tracking pre-identified risk thresholds/trigger levels which identify when new adaptation actions should be undertaken.
- Determining whether planned outputs and outcomes from adaptation actions have been achieved.
M&E helps demonstrate accountability of local government, industry and other coastal managers to their constituencies. This is important for leveraging continued community support for adaptation initiatives, and for demonstrating that taxpayer and investor funding has been spent wisely. This can help to ensure ongoing support for actions and any further funding that may be required. It can also improve performance through evaluation of efficiency and effectiveness, supporting adaptive management.
M&E is a key stage in the C-CADS process, see C-CADS 6: Monitor and evaluate.
Main text
1. How is monitoring and evaluation of adaptation plans different to regular M&E?
There are three principal differences between M&E for traditional management purposes and for adaptation planning.
- The time frames associated with climate change and the associated adaptation outcomes are likely to be longer than those of traditional management programs.
- There is uncertainty associated with the magnitude and nature of climate change, particularly at the local level, which can influence monitoring results and the ways in which they are evaluated.
- A continually changing climate means that traditional approaches to measuring change, such as comparing monitoring results to static baseline conditions, may not be possible, and a moving baseline must be considered.
Box 1: What is monitoring and evaluation?
Monitoring is a continuous or periodic process in which data on specific indicators are systematically collected to provide information about performance of a project.
Evaluation is a systematic and objective assessment of a completed or ongoing action, aimed at providing feedback about design, implementation and performance.
2. Selecting indicators and linking them to objectives
During the development of an adaptation plan, you need to select clear and measurable indicators to monitor the performance of each planned action. These indicators can help to:
- track the performance of activities undertaken during the development of an adaptation plan (e.g. stakeholder engagement activities)
- track pre-identified risk thresholds/trigger levels which identify when new adaptation actions should be undertaken
- determine whether planned outputs and outcomes from adaptation actions have been achieved.
Guidance on indicator selection can be obtained from Identifying Indicators.
Evaluation of approaches, activities and outcomes should be done at each stage of planning and implementation. For example, if developing an engagement strategy in step 1 of a planning cycle, such as C-CADS 1: Identify the challenges, you should include M&E indicators to evaluate the success of your approach. This can help to make any necessary adjustments to the process.
3. Designing an effective monitoring and evaluation (M&E) program
Design of an M&E program is critical to ensure that information generated through the program is used to:
- inform decision-making
- make appropriate adjustments to adaptation planning (if required)
- report to stakeholders and decision makers.
Without a clear link between monitoring and decision-making, there is a risk that monitoring will be seen as a drain on resources and be discontinued.
Accordingly, M&E should be designed with:
- clear objectives in mind
- clear processes for analysis and reporting of the results of monitoring
- transparent and objective evaluation methods.
Despite the long timeframes associated with climate change, it may be sufficient to monitor indicators in the shorter term to demonstrate that appropriate plans are in place and some of the adaptation actions have been implemented.
Monitoring can be expensive and as a result it is often neglected by decision makers. A well-designed monitoring program can reduce waste and help ensure that analysis and conclusions are robust. It is often important to engage experts to assist at the M&E design stage.
4. Broad categories of adaptation monitoring
In general, three broad categories of monitoring are required in an adaptation process. They are:
4.1 Monitoring of work undertaken in preparing an adaptation plan
There is a variety of activities that need to be undertaken during the development of an adaptation plan (Overview of C-CADS). The performance of these activities should be assessed and evaluated to ensure they can be improved in future iterations, or to help others learn and build from what has been done before.
As an example, one simple but effective way to make use of monitoring results is to record lessons learned while conducting stakeholder engagement around risk identification, and to use that learning to design future consultation in adaptation planning.
Ensuring monitoring is not overly intensive, expensive or time consuming is important if it is to be a help and not a hindrance.
4.2 Monitoring trigger points for implementation of adaptation actions
The uncertainty associated with climate change, including magnitude and timescales of impacts, together with the sometimes conflicting values of communities and organisations, make it difficult to determine exactly when actions should be implemented. This can be addressed by sequencing adaptation actions based on observed changes or availability of more accurate projections, rather than according to a rigid pre-defined time frame.
To implement this approach, you can identify and monitor trigger indicators which, when reached, can stimulate the implementation of the next action in a sequence. In identifying trigger indicators, it is important to consider the time required for the decision to be made and implemented. Time should be allowed for effective stakeholder engagement, or for any necessary investigation, design and development of actions.
Trigger indicators should be robust to natural climate variability (e.g. not triggered by a single extreme event). As far as possible, they should be measurable and understandable to all stakeholders; this will help in securing the social licence to implement actions. A simple illustration follows the examples below.
Indicators can be physical, environmental, social or economic. Examples include:
- physical – the number of flooding events over a certain period
- environmental – the number of mangrove seedlings per square metre of saltmarsh
- social – the level of satisfaction with beach visits
- economic – the costs of insurance premiums.
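To make the idea concrete, here is a minimal sketch in Python of a physical trigger indicator that is robust to a single extreme event: it fires only when the number of major flood events in a rolling multi-year window reaches an agreed level. The data, window length and threshold are hypothetical assumptions for illustration; in practice they would be agreed with stakeholders during adaptation planning.

```python
# A minimal sketch (hypothetical data and thresholds) of a trigger indicator
# that is robust to natural variability: it fires only when a rolling
# five-year count of major flood events reaches an agreed level, rather
# than reacting to one bad year.

# Hypothetical annual counts of major flood events, oldest to newest.
floods_per_year = [0, 1, 0, 0, 2, 1, 1, 0, 3, 2]

WINDOW_YEARS = 5   # assumed review window
TRIGGER_COUNT = 5  # assumed threshold agreed with stakeholders

def trigger_reached(counts, window, threshold):
    """Return True if any rolling window of annual counts meets the threshold."""
    for start in range(len(counts) - window + 1):
        if sum(counts[start:start + window]) >= threshold:
            return True
    return False

if trigger_reached(floods_per_year, WINDOW_YEARS, TRIGGER_COUNT):
    print("Trigger reached: begin engagement and design for the next adaptation action.")
else:
    print("Trigger not reached: continue monitoring.")
```

Note that the window-based design builds in the lead time discussed above: the trigger can be set conservatively so that, once it fires, there is still time for stakeholder engagement and for the investigation, design and development of the next action.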
Additional discussion of the rationale for and use of triggers is provided in Section 8 of Information Manual 4: Costs and Benefits, in What is a pathways approach to adaptation?, and in Applying a pathways approach.
4.3 Monitoring performance and outcomes of adaptation actions
Adaptation plans should contain measurable objectives together with indicators for each of the objectives. Monitoring programs need to collect appropriate data on each indicator and assess these against a baseline or reference conditions. Depending on the types of indicators that are being used, a variety of data will need to be collected. This may include biophysical and socio-economic information. Table 1 shows some types of indicators together with an explanation of their purpose.
The selection of indicators and the design of measuring techniques should both consider how the data and information will be analysed to support decision making. It may be possible to collect some baseline information or to select reference sites (controls) for comparison. It may be necessary to seek expert input in developing your monitoring plan, as inappropriate monitoring can result in bad decisions.
Monitoring should include information on:
- outputs – implementation of the planned adaptation action (e.g. has the seawall been built?)
- immediate and short-term outcomes (e.g. has the seawall reduced storm surge inundation to the target levels?)
- longer term outcomes (e.g. after 10 years, is the seawall reducing inundation to the target levels for that date?).
Important considerations in planning M&E for adaptation actions include the following factors.
- Scale of monitoring should include consideration of ecological, landscape, jurisdictional and management scales.
- Time horizons, which affect when data are collected, and over what timeframe. Many actions include a ‘lag’ before results can be detected (e.g. a change in bank stability due to planting).
- Timing of data collection needs to ensure data comparability over time and between variables (e.g. climate and tourism data are both seasonal, but their seasons differ, and this must be accounted for in M&E design; a sketch illustrating this follows the list).
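As a simple illustration of the last point, the following sketch (Python with pandas; both monthly series are hypothetical) aligns climate and tourism data to a common July–June reporting year so that two datasets with different seasonal peaks can be compared on the same footing.

```python
# A minimal sketch (hypothetical monthly series) of aligning two seasonal
# datasets to a common July-June reporting year before comparison.
import pandas as pd

months = pd.date_range("2015-01-01", periods=48, freq="MS")
visitors = pd.Series(range(48), index=months)        # hypothetical tourism counts
rain_mm = pd.Series([100.0] * 48, index=months)      # hypothetical rainfall totals

# 'AS-JUL' anchors annual bins at 1 July ('YS-JUL' in newer pandas), so both
# series share the same July-June 'year' despite differing seasonal peaks.
annual = pd.DataFrame({
    "visitors": visitors.resample("AS-JUL").sum(),
    "rain_mm": rain_mm.resample("AS-JUL").sum(),
})
print(annual)
```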
5. Undertaking monitoring
Box 2 summarises some key points about M&E program design.
Box 2: Practical tips for designing an M&E program
For the strategy and program scales, Thomsen et al. (2014) provide detailed monitoring and evaluation guidance for assessing the quality of adaptation planning in a holistic way, including planning characteristics (integrated, equitable, sustainable, informed, and responsive). They also provide guidance on capacity assessment that is valuable for evaluating options.
Subjective terms like ‘adequate’ and ‘sufficient’ can be useful in qualitative assessments. When an assessment is conducted in a group situation, these terms can be important in facilitating discussion and reaching agreement about the quality and nature of an indicator. They also allow comparison across multiple projects by enabling assessment to reflect individual contexts.
It is better to avoid words like ‘increase’, ‘decline’ and ‘change’ in measures. If you include these words, you will not have an indication of the magnitude of change, and therefore will not be able to judge the implications of a change in the status of the indicator. However, you can use these words as part of a performance rating system.
Irrespective of whether measures are quantitative or qualitative, a performance rating system can be used for reporting purposes, e.g. meeting or exceeding desired outcome or trend; moving towards desired outcome or trend; limited change towards desired outcome or trend; not meeting desired trend and showing signs of decline; or data unavailable for reporting period.
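As an illustration of this rating approach, the sketch below (Python; the thresholds are hypothetical assumptions, not part of this guidance) maps a quantitative measure of progress towards a target onto a qualitative performance rating scale, so that quantitative and qualitative measures can be reported in a common format.

```python
# A minimal sketch (hypothetical thresholds) mapping a quantitative measure
# onto a qualitative performance rating scale for reporting.

RATINGS = [
    "meeting or exceeding desired outcome or trend",
    "moving towards desired outcome or trend",
    "limited change towards desired outcome or trend",
    "not meeting desired trend and showing signs of decline",
    "data unavailable for reporting period",
]

def rate(progress_fraction):
    """Rate progress towards a target, where 1.0 means the target is met."""
    if progress_fraction is None:
        return RATINGS[4]
    if progress_fraction >= 1.0:
        return RATINGS[0]
    if progress_fraction >= 0.5:
        return RATINGS[1]
    if progress_fraction >= 0.0:
        return RATINGS[2]
    return RATINGS[3]  # negative progress: moving away from the target

# Example: dune revegetation at 60% of its target area.
print(rate(0.60))  # -> moving towards desired outcome or trend
```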
5.1 Collecting the right data
The choice of data to collect will depend on the nature of the project, its intended outcomes and what is feasible and cost-effective to collect.
Monitoring inherently means dealing with data. It is usually assumed that precise measurements are needed to inform decisions; in practice, managers are often comfortable with less precision, depending on the type of decision being made, and there are trade-offs between precision, cost and timeliness of data collection. Environmental managers can make accurate estimates against categorical assessment scales based on their experience (Cook et al. 2014), and this may be sufficiently precise for some types of decisions.
5.2 Issues of scale
Choosing the appropriate scale, or balance between scales, is important when measuring adaptation initiatives, as this can affect the appropriateness of indicators and the monitoring design. Sometimes monitoring information from one scale (e.g. individual projects) can be collated to inform evaluation at another scale (e.g. program scale), but sometimes additional or alternative indicators are needed, for example because the targets differ (e.g. a program seeking to build greater institutional or financial support).
Kennedy et al. (2009) identify multiple scales at which monitoring and evaluation can be conducted, including:
- ecological scale – ecosystem (e.g. rainforest), community (e.g. sub-tropical), species (e.g. cassowary)
- landscape scale – catchment, river, stream, creek
- jurisdictional – international, national, state, region, local government
- management – policy or strategy, program, project, activity
- temporal scale – short, medium and long term.
Adaptation actions vary across sectors and by intervention type. Interventions can be divided into three broad categories: knowledge-based (generating and using relevant information, including the assessment of risk and vulnerability), capacity-based (building the capability of target communities and organisations to perceive and evaluate climate risks and to formulate responses), and technically based (designing and implementing specific hard and soft measures to manage risk and enhance resilience in the focal area).
Table 1: Example from an adaptation plan showing indicators, associated measures, and an explanation of data that could be collected to assess change over time.
| Indicator example | Matched measure example | Explanation |
| --- | --- | --- |
| Political support for infrastructure protection | Extent of political support, e.g. for set-backs | Qualitative data (e.g. survey); consideration should be given to assessment both directly after and between events (when adaptation planning needs to occur) |
| Adequacy of planning in addressing these issues | Adequacy of plans for addressing required infrastructure upgrades | Qualitative scores for each measure |
| Adequacy of planning in addressing these issues | Adequacy of plans for shifting core business activities away from vulnerable areas | Qualitative scores for each measure |
| Adequacy of resources (information, finances) to implement plan | Availability of funding to support changes (a) in storm water drain capacity, and (b) required to relocate core/new business activities | Qualitative scores for each measure |
| Extent of legal, policy and constituent support to undertake actions | Extent that legal and other mechanisms support alternative locations for business activities | Qualitative scores for each measure |
| Extent of vulnerability of coastal infrastructure | Percentage of most vulnerable infrastructure upgraded to meet existing and forecast challenges | Quantitative data; categories could also be used, for example: all, most, some, none |
| Changes in impacts associated with storm surge | Changes in frequency, intensity and location of storm surge events after adaptation activities have been undertaken | Quantitative geo-referenced data, collected over a one-year timeframe post action (matches challenge data) |
5.3 Gathering and using monitoring information
It may not be necessary for all monitoring and evaluation data to be collected by a single organisation. In many cases, detailed community-collected data exist, as do data collected by business, non-government organisations, Catchment Management Authorities/NRM groups, Federal and state governments and other groups. Often the most challenging aspect of monitoring programs is to understand what data are already being collected, how they might be shared, and whether it is possible to adapt measures in some way to make use of already available resources. For example, a recent review of climate related monitoring and evaluation by local government (Thomsen et al. 2012) found at least five existing reporting requirements in NSW and four in Queensland with data that were directly relevant to climate adaptation monitoring and evaluation.
Councils should be clear about who is responsible for assessment, the timing and frequency of assessment, who will collect data, and how data will be stored and analysed. Before embarking on a detailed monitoring program, it is critical to be clear about the purpose of monitoring and to think carefully about whether the chosen frameworks, indicators, measures and data will keep the program within budget and capability.
5.4 Analysing and using monitoring and evaluation data
The type of analysis required will depend on the nature of the data being collected and the design of the monitoring program. It is important that monitoring results are used appropriately and do not end up supporting poor decision-making. We strongly recommend that users work with experts to analyse and interpret monitoring results. This is particularly important in climate change adaptation, where the multidisciplinary nature of the field generally results in a mix of qualitative and quantitative variables being collected, each requiring a different analytical approach, some simple and some not.
In considering analysis, it is useful to use approaches that will be clear to stakeholders. The more complex the analyses, the greater the challenge of communicating results to stakeholders. We provide guidance about working with consultants (see Using consultancies).
Data analysis needs to be conducted in a way that ensures managers use the results. For quantitative measures, statistical analysis should be used to distinguish trends from anomalies and, where possible, to be clear about cause-effect relationships. Decision-support tools such as multi-criteria analysis (MCA) and Bayesian models have been used in climate adaptation planning, but they require more specialist data analysis skills. Other similarly specialist techniques include fuzzy cognitive modelling and multivariate techniques such as pattern analysis.
In monitoring programs, data and information are often compared to reference or baseline conditions to assess the degree of change resulting from a management action. A key consideration is that baselines may not remain the same in a climate-affected future: a changing climate may shift the reference points themselves. This should be taken into account when designing monitoring programs and undertaking analysis.
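The following sketch (Python with SciPy; the annual series is hypothetical) illustrates both points in miniature: it fits a linear trend to help distinguish sustained change from a one-off anomaly, and it compares the latest observation to a moving ten-year baseline rather than a static historical mean.

```python
# A minimal sketch (hypothetical annual data) of two analysis ideas above:
# testing for a trend rather than reacting to single anomalies, and
# comparing recent values to a moving baseline instead of a static one.
from scipy.stats import linregress

years = list(range(2008, 2024))
# Hypothetical annual beach-width measurements (metres).
beach_width = [40, 41, 39, 40, 38, 39, 37, 38, 36, 37, 35, 36, 34, 35, 33, 34]

# 1. Trend test: a significant negative slope suggests sustained change,
#    not just a one-off anomaly.
result = linregress(years, beach_width)
print(f"slope = {result.slope:.2f} m/yr, p-value = {result.pvalue:.4f}")

# 2. Moving baseline: compare the latest value with the mean of the
#    preceding ten years, rather than a fixed historical baseline.
baseline = sum(beach_width[-11:-1]) / 10
print(f"latest = {beach_width[-1]} m vs 10-yr moving baseline = {baseline:.1f} m")
```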
5.5 Evaluation
Evaluation of adaptation planning includes tracking the actions being undertaken and providing feedback to relevant stakeholders on the success of those actions and any adjustments that may be required. It also includes assessing a project or program as a whole and determining whether its objectives were achieved. Monitoring of indicators provides the data that underpin evaluations, but it is the analysis and interpretation of those data that develops knowledge, and it is this knowledge that supports decisions about whether the objectives in an adaptation plan are being achieved.
There are a variety of ways in which evaluations can be conducted. It is useful to take an approach that ensures evaluations are done with sufficient rigour and logic, and also considers the data and information that have been collected. Villanueva (2011) identified four types of evaluation (Table 2):
- input-output-outcome evaluation (also called outcome, impact or results evaluation)
- process-based evaluation
- evaluation of behavioural change
- economic evaluation.
More information on each of these is provided in Table 2.
Table 2: Approaches and methodologies for evaluating adaptation interventions. Source: Villanueva 2011, p. 20.
| M&E methodologies | Focus on | Approach | Assumption |
| --- | --- | --- | --- |
| Input-output-outcome evaluation | Effectiveness | Elements of adaptive capacity/risk are predetermined and evaluated against a set of indicators | Increased adaptive capacity will ultimately lead to reduced vulnerability; risk is probabilistically determined and known |
| Process-based evaluation | | | |
| Evaluation of behavioural change | | | |
| Economic evaluation | Efficiency | Benefits of adaptation are measured in terms of economic loss | The ability to determine a baseline and projected benefits and losses |
6. Communicating with stakeholders
Communicating with key stakeholders, including the community, is an important way of building political and constituent support for climate adaptation programs and projects.
Reports are often filled with detailed statistical data and graphs. Without careful explanation, the core findings from these data may be lost to general readers, some of whom may be important stakeholders for the project or program.
Some useful things to consider when communicating M&E results include using:
- a consistent scale for reporting across an indicator set
- a 3 or 4-point scale. You could use words like ‘very good’, ‘good’, ‘poor’, ‘very poor’ to describe your scale (as in the Great Barrier Reef Outlook Report 2009), or ‘positive’, ‘of concern’, and ‘action required’ (as in the Sunshine Coast Council Sustainability Indicators Report) or you could develop your own wording (as in NSW State of the Parks Report 2004) (see list of example M&E reports at end for links).
- colours to provide a simple grading schema, although the choice of colour is important – red may be inappropriate.
- a scale system associated with targets, for example ‘target met’, ‘target not met’ (e.g. the La Trobe University Sustainability Report).
You can also report on confidence in your indicator scores; for example, the National Sustainability Council report (2013) uses shaded circles to indicate adequate or limited evidence for each indicator.
It’s often tempting to include words such as ‘increase’ or ‘decrease’ within an indicator. Indicator wording should be neutral, but you can capture change in the way you report. For example, the National Sustainability Council in its report (2013, for example p. 180) uses an arrow system representing improving, stable, and deteriorating trends.
If you have quantitative data, you might like to present data in a table format.
You might choose to consider summary statements, such as the summaries of impacts on environmental, economic and social values in the Great Barrier Reef Outlook Report (2009). If producing a composite score across a range of indicators, be careful to justify how you came to this score (e.g. whether you are averaging across criteria or whether one criterion is more important than another).
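If you do report a composite score, one transparent approach is to publish the weights alongside the result. The sketch below (Python; the indicator names, scores and weights are hypothetical) shows a simple weighted average in which the weights make explicit whether one criterion counts more than another.

```python
# A minimal sketch (hypothetical indicators, scores and weights) of a
# transparent composite score: the weights are explicit, so readers can
# see whether one criterion is treated as more important than another.

scores = {          # each indicator scored 0-100 for the reporting period
    "political support": 70,
    "plan adequacy": 55,
    "funding availability": 40,
}
weights = {         # should be justified and reported alongside the score
    "political support": 0.25,
    "plan adequacy": 0.50,
    "funding availability": 0.25,
}

composite = sum(scores[k] * weights[k] for k in scores)
print(f"Composite score: {composite:.1f} / 100")  # 55.0 with these numbers
```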
Further information
Examples of M&E reports (all accessed 28 April 2017):
- Great Barrier Reef Outlook Report 2009: http://www.gbrmpa.gov.au/managing-the-reef/great-barrier-reef-outlook-report/outlook-report-2009.
- Sunshine Coast Council Sustainability Indicators Report: http://www.usc.edu.au/media/936780/Sustainability-indicators-2012_Part-1.pdf.
- La Trobe University Sustainability Report: http://www.latrobe.edu.au/__data/assets/pdf_file/0005/576689/New-2013-Sustainability-Report-Impacting-Futures.pdf.
- Sustainable Australia Report 2013 (National Sustainability Council): http://www.environment.gov.au/system/files/resources/e55f5f00-b5ed-4a77-b977-da3764da72e3/files/sustainable-report-full.pdf.
- NSW State of the Parks Report: http://www.environment.nsw.gov.au/sop04/summarysop04.htm.
Source material
Cook, C.N., R.W. Carter, and M. Hockings, 2014: Measuring the accuracy of effectiveness evaluations of protected areas. Journal of Environmental Management, 139, 164-171.
Kennedy, E.T., H. Balasubramanian, and W.E.M. Crosse, 2009: Issues of scale and monitoring status and trends in biodiversity. New Directions for Evaluation, 122, 41-51.
National Sustainability Council, 2013: Sustainable Australia Report 2013: Conversations with the future. Canberra, DSEWPaC. Accessed 1 June 2016. [Available online at https://www.environment.gov.au/sustainability/publications/sustainable-australia-report-2013-conversations-future].
Thomsen, D.C., T.F. Smith, and N. Keys, 2012: Adaptation or Manipulation? Unpacking climate change response strategies. Ecology and Society, 17(3), 20.
Thomsen, D.C., et al., 2014: A Guide to monitoring and evaluating coastal adaptation, 2nd edition. Sydney Coastal Councils Group, Australia. Accessed 1 June 2016. [Available online at http://www.sydneycoastalcouncils.com.au/sites/default/files/A-Guide-to-Monitoring-and-Evaluating-Coastal-Adaptation.pdf].
Villanueva, P.S., 2011: Learning to ADAPT: monitoring and evaluation approaches in climate change adaptation and disaster risk reduction – challenges, gaps and ways forward. Strengthening Climate Resilience Consortium. Accessed 8 January 2018. [Available online at http://www.ids.ac.uk/publication/learning-to-adapt-monitoring-and-evaluation-approaches-in-climate-change-adaptation-and-disaster-risk-reduction-challenges-gaps-and-ways-forward].