Guidelines for Evaluation

GUIDELINES FOR PROGRAMME AND PROJECT EVALUATIONS

IMPRINT


PUBLISHED BY:

Austrian Development Agency

Zelinkagasse 2, 1010 Wien
Tel.: +43 (0)1 90399-0, Fax: +43 (0)1 90399-1290
[email protected], www.entwicklung.at

Management and editorial team:

Sigrid Breddy, Astrid Ganterer, Tonka Eibs
Austrian Development Agency

Consultations and drafting:

Lisa Ringhofer, Silvia Weninger
www.triple-minds.com

Graphic design and layout:

Christina Brandstötter
www.brandsisters.com

Vienna, July 2020

Order from:

Austrian Development Agency (ADA), Zelinkagasse 2, 1010 Vienna, Austria
[email protected], www.entwicklung.at

Photo: © istockphoto/Patacharin Saenlakon

CONTENTS

PREAMBLE … 5
LIST OF ABBREVIATIONS … 6
I. INTRODUCTION … 7
II. GUIDING PRINCIPLES … 8
III. TYPES OF EVALUATION MANAGEMENT AND ROLES AND RESPONSIBILITIES … 11
IV. THE EVALUATION PROCESS IN 15 STEPS … 13

PLANNING, PREPARING AND COMMISSIONING THE EVALUATION … 14
STEP 1: Frame the evaluation interest and use … 14
STEP 2: Detail purpose and objectives … 15
STEP 3: Define key evaluation questions … 16
STEP 4: Outline evaluation design and approach … 17
STEP 5: Estimate the budget … 21
STEP 6: Develop Terms of Reference (ToR) … 24
STEP 7: Select and commission evaluator(s) … 25

INCEPTION … 26
STEP 8: Kick-off and clarification meeting … 26
STEP 9: The evaluation matrix … 26
STEP 10: The inception report … 28

INQUIRY … 29
STEP 11: Data collection and analysis … 29

SYNTHESIS … 31
STEP 12: Findings, conclusions and recommendations … 31
STEP 13: The evaluation report … 32

WORKING WITH THE FINDINGS … 34
STEP 14: Disseminate evaluation findings … 34
STEP 15: Coordinate management response and follow-up … 35

SUMMARY OF KEY OUTPUTS ALONG THE EVALUATION PROCESS … 37

ANNEXES … 38
ANNEX 1: Workflows for Partner-led and ADA-led Evaluations … 38
ANNEX 2: Scoping Exercise … 42
ANNEX 3: Checklist of Documents for Programme and Project Evaluations … 43
ANNEX 4: Quality Checklist for Terms of Reference (ToR) … 44
ANNEX 5: Quality Checklist for Inception Report (IR) … 46
ANNEX 6: Quality Checklist for Evaluation Report (ER) … 48
ANNEX 7: Template for Evaluation Matrix … 51
ANNEX 8: Template for Feedback Matrix … 52
ANNEX 9: Results Assessment Form (RAF) … 53
ANNEX 10: Template for Management Response (MR) … 54
ANNEX 11: Bibliography … 55

LIST OF TABLES/FIGURES/BOXES

TABLE 1: An overview of data collection methods and techniques … 19
TABLE 2: Example of an evaluation matrix (excerpt) … 27
TABLE 3: Possible methods of data analysis … 30
TABLE 4: Different feedback formats and purposes … 32

FIGURE 1: Three phases and fifteen steps of the evaluation process … 13
FIGURE 2: The logical flow from data processing to data analysis … 29
FIGURE 3: Logical flow from findings, conclusions to recommendations … 31
FIGURE 4: Key outputs along the evaluation process … 37

BOX 1: Overview of the OECD/DAC evaluation criteria … 16
BOX 2: Example to illustrate steps 1 to 5 … 22

PREAMBLE

It is now more than ten years since the Austrian Development Agency (ADA) first developed guidelines on programme and project evaluations. After a decade of practice, we have now taken the opportunity to review the existing guidance.

The guidelines at hand have been inspired by our commitment to learn from the everyday lessons of our own work and pay tribute to recent developments within the global and national evaluation communities. We take a conscious stance to change ADA's programme and project evaluation practice – away from requesting that each programme and project be evaluated at least once in its cycle, towards evaluating more selectively, purposefully and with a specific focus on utilisation. This will allow us both to concentrate our resources where they are most useful and to enhance the quality of our evaluations and, therefore, our development results. We are thereby also implementing specific recommendations of the 2020 OECD/DAC Peer Review and the first meta-evaluation of ADA programme and project evaluations, completed in 2019.

The guidelines are based on international and national standards and good practice in the field of development evaluation, and should be read together with the Evaluation Policy of the Austrian development cooperation and the Evaluation Criteria of the OECD/DAC. They set high standards for the quality and process of programme and project evaluations – we are convinced this is the way forward – and provide practical guidance and tools for their implementation.

Evaluation is a shared responsibility of everyone involved in the planning, implementation and monitoring of programmes and projects. This is why the new guidelines emphasise the need for consultations within ADA and with our partners: starting from giving thought to which programmes and projects will most usefully be evaluated, to earmarking the required resources, asking the right questions and choosing the most adequate evaluation approach and design. This is also important for creating a joint commitment to use evaluation findings.

We remain committed to institutional learning to ensure our work has the impact we aim for.

I invite you all to join us on this journey.

Vienna, July 2020

Martin Ledolter
Managing Director, ADA


LIST OF ABBREVIATIONS

ADA Austrian Development Agency
ADC Austrian Development Cooperation
AFD Agence Française de Développement
ALNAP Active Learning Network for Accountability and Performance in Humanitarian Action
BMK Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology
CBA Cost-Benefit Analysis
DAC Development Assistance Committee (of the OECD)
DeGEval DeGEval Evaluation Society
DMI International Third Party Funding (ADA Organisational Unit)
EES European Evaluation Society
EGSIM Environmental, Gender and Social Impact Management (at ADA)
EPOL Development Communication and Education in Austria (ADA Organisational Sub-Unit)
EQ Evaluation Question
ER Evaluation Report
ERG Evaluation Reference Group
EU European Union
EVAL Evaluation Unit (ADA Organisational Sub-Unit)
FGDs Focus Group Discussions
FMS Funding Management System (at ADA)
GCF Green Climate Fund
GIZ Deutsche Gesellschaft für Internationale Zusammenarbeit
HRBA Human Rights Based Approach
HQ Headquarters
IO International Organisation
IP Implementing Partner
IR Inception Report
KI Key Informant
L&R Countries & Regions (ADA Organisational Unit)
MFA Federal Ministry for European and International Affairs
MR Management Response
MSC Most Significant Change
MSF Médecins Sans Frontières
ODA Official Development Assistance
OECD Organisation for Economic Co-operation and Development
PMT Project Management Team (ADA)
POM Project Operational Manual (for third-party funded projects implemented by ADA)
PP Programme and Project
PPI Programmes and Projects International (ADA Organisational Unit)
PPM Programme and Project Manager (ADA)
PRA Participatory Rural Appraisal
RAF Results Assessment Form
SDC Swiss Agency for Development and Cooperation
SEVAL Swiss Evaluation Society
T&Q Themes & Quality (ADA Organisational Unit)
ToR Terms of Reference
UNODC United Nations Office on Drugs and Crime
VAT Value Added Tax
VEN Vienna Evaluation Network
VOPEs Voluntary Organizations for Professional Evaluation
W&E Private Sector & Development (ADA Organisational Unit)
ZGI Civil Society International & Humanitarian Aid (ADA Organisational Unit)


I. INTRODUCTION

Welcome to Module 1 of the Evaluation Policy of the Austrian development cooperation1: The Guidelines for Programme and Project (PP) Evaluations at the Austrian Development Agency (ADA).2

These guidelines come at an important moment in time given the recent revision of the evaluation criteria along with relevant definitions and principles for use by the OECD Development Assistance Committee (DAC)3 in December 2019 as well as the release of an interministerial Evaluation Policy by five actors of the Austrian development cooperation4 in August of the same year.

The overall goal of these guidelines is to contribute to better PP evaluation and evaluation use and, therefore, in the mid- to long-term, to better development results. To that end, they first set out the key principles guiding programme and project evaluations at ADA (Chapter II). They then distinguish between different types of PP evaluation management and define respective roles and responsibilities (Chapter III). Finally, they offer practical guidance for each step within the evaluation process (Chapter IV). In addition, the guidelines provide tools and templates to help foster a joint understanding and approach to contribute to high quality programme and project evaluations at ADA (Annexes 2-10).

ADA staff and implementing partners (IP) at Headquarters (HQ) and in the field5 are the main intended users of these guidelines. As such, the guidelines are aimed at assisting in the design, management, quality assurance and utilisation of evaluations of programmes and projects funded or implemented by ADA. They are also intended for use by external evaluators who need to understand how PP evaluations are conducted in the context of the Austrian Development Cooperation (ADC) and its operational arm, the Austrian Development Agency, and by other interested audiences as well.

These guidelines are the result of an extensive review and consultation process and reflect learnings from the Meta-Evaluation of ADA Project and Programme Evaluations 2016-20186 as well as from ADA's and IP's evaluation practice over the past decade. They are based on international and national standards and principles as well as a review of available guidance from bi- and multilateral development actors. Consultations for the guidelines included key informant interviews, a survey among key ADA staff and implementing partners at headquarters and in the field, two workshops with a core group of ADA colleagues across organisational units at HQ and a final feedback round with key ADA staff at HQ and in the field. This participatory and learning-oriented approach to developing the guidelines was chosen to make them responsive to demand, user-friendly and practicable.

These guidelines are most useful if read together with the two following documents:

1. The Evaluation Policy of the Austrian development cooperation, which defines the overall quality standards, principles and definitions of relevance to Austrian development evaluation, and

  2. The OECD/DAC evaluation criteria, standards and principles for use, which provide the recognised international framework for evaluation in development cooperation (and beyond).

Throughout the guidelines, links between the different chapters and annexes as well as to key external documents are provided for ease of reference and use.

Enjoy!

1 The term 'Austrian development cooperation' denotes the entirety of Austrian ODA actors and contents and therefore extends beyond ADC, i.e. MFA and ADA. The term 'Austrian Development Cooperation' (ADC), on the other hand, is used as an institutional term, comprising exclusively the two development actors MFA and ADA. See MFA 2019a:3.

2 These guidelines replace the Guidelines for Project and Programme Evaluations published by ADC in 2008. See ADC (2008).

3 OECD (2019)

4 MFA (2019a)

5 For ADA, this includes ADA staff in Coordination Offices and Project Management Teams (PMT).

6 ADA (2019a)


II. GUIDING PRINCIPLES

Programme and project evaluations are defined as the "[e]valuation of a single development measure designed to attain specific objectives with a pre-specified budget and a set plan of action (project evaluation) or an evaluation of a combination of measures put together to attain specific development objectives at global, regional, national or sectoral levels (programme evaluation)".7

The following principles underpin and inform relevant decision-making, thinking and practice with regards to programme and project evaluations at ADA.

1. Evaluate programmes and projects purposefully

Programmes and projects must be evaluated with a clear purpose in mind, rather than just as a matter of principle. Instead of aiming to evaluate every single intervention funded by ADA, the goal is to have the right things evaluated for the right reasons at the right time. ADA is committed to evaluating a minimum of 30 to 50 percent of its programmes and projects. To reach this ambition, adequate reflection and decision-making on whether and when to evaluate need to occur within and across ADA's organisational units, taking into account strategic considerations that go beyond individual programmes or projects.8 It is also important that ADA and its implementing partners jointly reflect on the purpose and use of an evaluation and plan accordingly from the outset of a programme or project. By default, all programmes and projects that are subject to approval by ADA's Supervisory Board must be evaluated once in their programme/project cycle.9

The criteria below will help decision-making with regards to whether and when an evaluation may be advisable – or not. The list is not exhaustive and does not replace a consultative, contextualised reflection process. Budget and human resource issues need to be considered together with other factors when deciding whether and when to evaluate a particular programme or project.

Reasons supporting a decision to evaluate:

• There is a specific knowledge interest for conducting an evaluation10
• Pilot programmes and projects or innovative approaches with a potential for replication or scaling-up
• Programmes and projects are being considered for a subsequent phase
• Programmes and projects address evidence gaps and demonstrate a high potential for learning (e.g. new approaches or themes, particularly successful or unsuccessful interventions)
• Programmes and projects are of strategic importance (e.g. strategic nature of partnership with implementing partner, strategic interest in area, theme, modality or other aspect of engagement)
• High expected utilisation and usefulness of findings

Reasons for refraining from an evaluation:

• Programmes and projects funded from ADA's Small Project Fund (unless one or several of the criteria above are present)
• Contributions to funds or appeals, where ADA is only a small contributor (unless one or several of the criteria above are present)
• In case of ADA's contributions to international organisations (IO), no separate evaluation is necessary if the IO conducts an evaluation. Instead, reference and use are made of the development partner's evaluation findings and recommendations (subsidiarity principle11)

For guidance on scoping and framing the evaluation interest and use see Chapter IV, Step 1 (Frame the evaluation interest and use) of the guidelines.

7 MFA 2019a:6

8 This could be done in the context of the annual work planning processes (Arbeitsfeldprogramm) at unit and/or departmental level or during team retreats. ADA's thematic advisors and the EGSIM appraisal team should always be involved in this exercise.

9 These are programmes and framework contracts with a budget exceeding 3 million euro as well as projects with a budget exceeding 2 million euro. See ADA 2017:2

10 At times a specific evaluation interest only develops during implementation, contingent on specific events. In those cases, there should be flexibility, also of budgetary nature, to reserve or re-allocate resources for evaluation within a programme or project budget.

11 MFA 2019a:8

2. Use the OECD/DAC evaluation criteria selectively and thoughtfully

Evaluation criteria must be used and selected thoughtfully for programme and project evaluations at ADA. This is in line with the Evaluation Policy12 and the adapted definitions and principles for use of the OECD/DAC evaluation criteria13 and is intended to help ensure that specific information needs are addressed in a particular context and time. It further builds on a finding of the Meta-Evaluation of ADA Project and Programme Evaluations 2016-2018, which highlights the importance of clearly connecting the purpose of an evaluation with selecting and prioritising the relevant evaluation criteria to ensure adequacy and feasibility. This approach is expected to help lay the ground for enhanced programme and project evaluation quality and use.14

For guidance on how to select evaluation criteria depending on the evaluation purpose and objectives and/or on how to use them as guiding frame for developing evaluation questions see Chapter IV, Steps 2 (Detail purpose and objectives) and 3 (Define key evaluation questions) of the guidelines.

3. Apply evaluative thinking throughout a programme and project cycle

Evaluative thinking – that is, a conscious way of thinking with a lens on how programmes and projects that are being planned or implemented can be evaluated and contribute to learning – is key to making programmes and projects more evaluable and to enabling a meaningful and useful evaluation. Important measures to be taken during the planning phase of a programme and project may include a programme or project inception phase in which baselines for key indicators are established (which an evaluation can draw upon and assess progress against), the development of a Theory of Change/Logic Model to map out how the intervention is expected to deliver the desired results, and the development of rigorous processes and tools to collect monitoring data (which is the foundation for robust and evidence-based evaluation findings). Moreover, the systematic analysis of monitoring data during programme and project implementation is critical for revealing trends and dynamics that may indicate the need for an evaluation or alternative review, learning or audit processes.

For guidance on how evaluative thinking and relevant considerations throughout the programme and project cycle feed into the design of an evaluation see Chapter IV, Step 1 (Frame the evaluation interest and use) and Step 4 (Outline evaluation design and approach) of the guidelines.

4. Carefully balance scope, budget and time

Striking the right balance between what we really want to know (scope), the resources available (budget) and the timing along the project's or programme's life cycle (time) is key to ensuring high-quality evaluations. All these aspects are closely interlinked and interdependent: The scope must be realistic in terms of available resources, i.e. time and budget. The evaluation questions (what we want to know) determine what budget is needed. The timing and timeliness of an evaluation are crucial to ensure uptake and use of findings. Decision-making with regards to timing and time further needs to take into consideration the availability of the stakeholders of an evaluation (e.g. farmers may not be able to support the evaluation during the harvest period).

12 See MFA 2019a:9

13 See OECD (2019)

14 See ADA (2019a)


The Evaluation Policy recommends earmarking, where relevant, at least 3 percent of the respective programme or project budget for an evaluation.15 For smaller programmes and projects16 it is further recommended that at least 25,000 euro are earmarked for an evaluation. Establishing a budgetary floor is important given ADA's experience and the findings of the Meta-evaluation, which show that conducting an evaluation below a certain budgetary threshold is both unrealistic and detrimental to quality. Programmes and projects that cannot meet this floor may want to consider alternative review and learning processes.17 For larger programmes and projects exceeding 3 million euro, a budgetary ceiling of 90,000 euro is recommended for evaluations.
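To make the interplay of the 3 percent earmark, the 25,000 euro floor and the 90,000 euro ceiling concrete, the following minimal sketch (in Python, with purely hypothetical project budgets) shows one way such an indicative figure could be computed. Only the thresholds are taken from the recommendations above; the function name and the example budgets are illustrative assumptions, not part of the Evaluation Policy.

```python
def recommended_evaluation_budget(project_budget_eur: float) -> float:
    """Indicative evaluation budget following the rules of thumb quoted above:
    roughly 3 percent of the programme/project budget, with a floor of
    EUR 25,000 for smaller interventions and a ceiling of EUR 90,000 for
    programmes and projects exceeding EUR 3 million."""
    share = 0.03 * project_budget_eur          # 3 percent earmark
    if project_budget_eur > 3_000_000:         # larger programmes: ceiling applies
        return min(share, 90_000)
    return max(share, 25_000)                  # smaller interventions: floor applies

# Hypothetical examples (illustrative figures only):
for budget in (500_000, 833_000, 2_000_000, 5_000_000):
    estimate = recommended_evaluation_budget(budget)
    print(f"Project budget EUR {budget:>9,} -> evaluation budget approx. EUR {estimate:>9,.0f}")
```

For a project of roughly 833,000 euro, the 3 percent share comes to just under 25,000 euro, which is why the guidelines treat this figure as the approximate boundary for smaller programmes and projects (see footnote 16).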

For guidance on how to balance scope, budget and time of an evaluation see Chapter IV, Step 1 (Frame the evaluation interest and use), Step 4 (Outline evaluation design and approach) and Step 5 (Estimate the budget) of the guidelines. Annex 11 (Bibliography) points to further reading on alternative review and learning processes.

5. Use and learn from evaluations18

Evaluations must have a clearly defined intended use (see principles 1 and 2). Yet this alone is not sufficient. In order to contribute to better development results, evaluations must be used. The preconditions to promote learning and subsequent use of evaluation findings are being set during the design phase of an evaluation, starting with scoping and developing the evaluation questions. Equally, evaluation use and learning from evaluation are directly linked to the quality and credibility of an evaluation and its findings, conclusions and recommendations as well as the evaluation process itself. All of this underscores the need for solid evaluation design and application of standards and principles for good evaluation, including independence.19 The Evaluation Policy defines preconditions and tools to help foster evaluation use and learning throughout the evaluation process and beyond, including the establishment of a reference group, the publication and dissemination of the evaluation report, the participatory development and timely implementation of a management response and the regular monitoring thereof.20

The use and learning from evaluation also require a conducive environment and a learning culture within organisations – a learning culture and related processes that go beyond the use and learning from individual evaluations. It is therefore important to share findings and recommendations beyond the primary users of an evaluation and to proactively engage different stakeholders, organisational units and senior management in institutional learning processes. This is necessary for the evaluation to bring about changes at a broader level and to a larger scale.

For guidance on evaluation use see Chapter IV, Step 1 (Frame the evaluation interest and use), Step 14 (Disseminate evaluation findings) and Step 15 (Coordinate management response and follow-up).

15 MFA 2019a

16 Defined as projects or programmes with a budget of approximately 833,000 euro or below, where application of the above percentage (3 percent) would result in a limited evaluation budget.

17 See ADA (2019a)

18 These can include lessons learned exercises at mid or end of programme or project, in person or virtually, learning workshops or a synthesis of already completed evaluations to learn from their findings and recommendations. See Chambers, R. (2002) and Ramalingam, B. (2006) in the bibliography (see Annex 11), which offers further reading on such learning formats.

19 MFA 2019a:8 and MFA 2019:11

20 MFA 2019a:8 and MFA 2019:13


III. TYPES OF EVALUATION MANAGEMENT AND ROLES AND RESPONSIBILITIES

In alignment with the Evaluation Policy, evaluations of programmes and projects that are funded or implemented by ADA are generally conducted by (an) external, independent evaluator(s).21 The role of ADA staff and implementing partners (IP) is therefore primarily concerned with evaluation management and quality assurance as detailed below.

1. Types of evaluation management: Partner-led and ADA-led

ADA distinguishes between two types of management of programme and project (PP) evaluation:

1. Partner-led evaluations: Evaluation management lies with ADA's implementing partner. This is the case when a programme or project is funded by ADA and implemented by an IP through a grant agreement22 (Förderungen), with the planned evaluation budget included in the programme or project budget.23

  2. ADA-led evaluations: Evaluation management lies with ADA. This is the case when:

A. A programme or project is funded by ADA and implemented by an IP through a service contract (Aufträge), or where ADA commissions a programme or project evaluation outside the framework of the programme or project budget. In both cases, ADA has an overarching interest to manage and steer the evaluation.24 In these cases, evaluation management lies with the programme manager or advisor who is responsible for the programme or project at ADA HQ.25

B. A programme or project is (co-)funded by a third party26 and implemented by ADA. In this case, evaluation management generally lies with the relevant Project Management Team (PMT) in the field.27 Evaluation management can also lie with the Department Programmes and Projects International (PPI) at ADA HQ and/or with the ADC Coordination Office in the field when there is a particular overarching interest to manage and steer the evaluation.28

2. Roles and responsibilities

Regardless of where evaluation management lies, decision-making with regards to whether and when a programme or project will be evaluated as well as the scoping, purpose and objective(s) of an evaluation follows a consultative process between ADA and its implementing partners and/or other national and bi- and multilateral donors as applicable. Please refer to Annex 1 for a workflow illustrating the relevant processes, roles and responsibilities as well as the support options available to managers of partner- and ADA-led programme and project evaluations. In what follows, a brief summary is provided of the main lead and support roles when it comes to the evaluation of programmes and projects funded or implemented by ADA: the Evaluation Manager, the ADA Programme and Project Manager and ADA's Evaluation Unit.

21 MFA 2019a:11

22 This includes, for example, individual grants, strategic partnerships and grants awarded through calls.

23 Some programmes or projects implemented by an IP may be co-funded by a third party (for example by the Swiss Agency for Development and Cooperation/SDC). In this case, ADA is co-funding the programme or project and not implementing it as in the case of 2 (B).

24 Such an interest may arise, for example, when the evaluation purpose is to assess the relevance and effectiveness of ADA's sectoral engagement across different programmes and projects implemented by IPs.

25 Concretely, these are programme managers or advisors in the organisational units L&R, T&Q, W&E, ZGI or EPOL, who are assigned the responsibility for a given programme or project in ADA's funding management system (FMS) (Sachbearbeiter/in des Projekts).

26 A third party may be a multilateral organisation (e.g. the European Union) or fund (e.g. the Green Climate Fund), a bilateral donor (e.g. Finland) or an Austrian actor (e.g. BMK or Bundesland Vorarlberg).

27 In the context of international third-party funded projects, the procurement of services (including evaluation services) is regulated by the Project Operational Manual (POM) (ADA 2019b:37-38). Procurement and evaluation management are therefore not necessarily led by the same person. This concerns mainly Step 7 in the evaluation process (see Chapter IV).

28 Such an interest may arise, for example, when the evaluation purpose is to promote learning and assess broader aspects of relevance, effectiveness and coherence across different third-party funded programmes or projects implemented by ADA.


Evaluation Manager

The evaluation manager plays a crucial role in steering and coordinating the evaluation process and in upholding the principles and standards for good development evaluation set out in the Evaluation Policy and the Guidelines for Programme and Project Evaluations throughout. The evaluation manager also safeguards the quality and timeliness of programme and project evaluations, their findings and recommendations. In the case of partner-led evaluations it is the responsibility of the evaluation manager to contact the ADA Programme and Project Manager for approval of the draft Terms of Reference (ToR), inception report (IR) and evaluation report (ER). Chapter IV details the specific tasks of the evaluation manager for each step in the evaluation process.

ADA Programme and Project Manager

The term ADA Programme and Project Manager (PPM) refers to ADA staff responsible for a particular programme or project in ADA's Funding Management System (FMS).29 The ADA PPM is accountable for the quality and timely implementation of ADA-funded or implemented programmes and projects – including their evaluations. The approval of the evaluation ToR, the IR and the final ER always lies with the ADA PPM, whether for partner-led or ADA-led evaluations. Prior to approval, the PPM needs to ensure that other organisational units are consulted as relevant.30 In particular, the PPM should always inform and involve ADA's thematic advisors at HQ and in Coordination Offices as well as ADA's EGSIM appraisal team in the review and feedback process. In the case of partner-led evaluations, the PPM acts as interface between the IP and the relevant organisational units involved in the evaluation process at ADA HQ and in the field (see Annex 1 and Chapter IV, Steps 1 to 15).

ADA’s Evaluation Unit

ADA's Evaluation Unit (EVAL)31 based in Vienna provides technical advice and quality assurance support to programme and project evaluations when contacted by the ADA PPM. In line with ADA's Public Disclosure Policy32, the executive summaries of programme and project evaluations are published for all programmes with a budget exceeding 3 million euro and for all projects with a budget exceeding 2 million euro.33 These evaluations should always be reviewed and quality assured by ADA's Evaluation Unit. It is the responsibility of the ADA PPM to make timely contact with ADA EVAL to request support.

29 For the purpose of these guidelines, the term ADA PPM generally means the ADA staff responsible for the programme or project in ADA's funding management system FMS (Sachbearbeiter/in des Projekts). In the case of third-party funding, where ADA staff responsible for third-party funded projects in the FMS are themselves funded with those funds, the Head of Unit International Third Party Funding (DMI) assumes the PPM function to warrant the necessary independence.

30 In cases where the Head of DMI functions as PPM, this coordination task is delegated to the ADA staff responsible for the programme or project in ADA's funding management system FMS (Sachbearbeiter/in des Projekts).

31 The mandate of ADA's evaluation unit includes the steering of strategic evaluations in line with ADC's bi-annual evaluation plan, the coordination and monitoring of management responses to strategic evaluations, support to programme and project evaluations, contributions to the continuous development of ADC's evaluation system, and representing ADC together with the MFA as a competence centre on evaluation in international networks.

32 ADA (2018a)

33 This applies to all programmes and projects approved after 3rd July 2018.

IV. THE EVALUATION PROCESS IN 15 STEPS

This chapter is organised to reflect a typical evaluation process. It leads the reader through the operational steps from taking the decision to do an evaluation all the way to the utilisation of evaluation findings – providing guidance and tips along the journey.

Each evaluation process has three overall phases:

1. Design, 2. Implementation and 3. Utilisation.

The guidelines identify 15 steps along the evaluation process and provide guidance for solid evaluation management and quality assurance throughout. Taken together and followed with care, they can contribute towards solid and useful evaluation management and results which, in turn, can help inform better programmes and projects and, ultimately, better development results.

Programme and project evaluations play a crucial role in assessing the extent to which ADA-funded or implemented projects and programmes adhere to: 1. ADA's basic principles and quality criteria for programme and project design34, including equity, participation and empowerment, 2. Its human rights based approach (HRBA) to development and 3. The cross-cutting issues governing ADA's work, such as environment and climate change and gender equality.35 Depending on the context of the programme or project being evaluated, other approaches, such as the conflict-sensitive approach, may be relevant as well. These issues should be considered and applied within each phase of the evaluation process and be followed through as an integral part thereof. When and how to best integrate these issues within the design, implementation and utilisation of programme and project evaluations is highlighted within the relevant steps below. Relevant tools are provided as annexes.

Figure 1: Three phases and fifteen steps of the evaluation process

DESIGN – Planning, preparing and commissioning the evaluation
1 Frame the evaluation interest and use
2 Detail purpose and objectives
3 Define key evaluation questions
4 Outline evaluation design and approach
5 Estimate the budget
6 Develop Terms of Reference (ToR)
7 Select and commission the evaluator(s)

IMPLEMENTATION
Inception
8 Kick-off and clarification meeting
9 The evaluation matrix
10 The inception report
Inquiry
11 Data collection and analysis
Synthesis
12 Findings, conclusions and recommendations
13 The evaluation report

UTILISATION – Working with the findings
14 Disseminate evaluation findings
15 Coordinate management response and follow-up

The figure also indicates lead and support roles: the evaluation manager (either IP or ADA PPM) leads the design and utilisation phases; the evaluator(s) lead implementation, with the evaluation manager in a supporting role; and in the case of partner-led evaluations the ADA PPM (plus ADA EVAL upon contact) provides support. All steps ultimately work towards better development results.

34 Hereinafter referred to as ADA's basic principles. They include: Ownership, do no harm, equity, equality and non-discrimination, inclusive participation and equal representation of all stakeholders, accountability and transparency, empowerment, sustainability (ADA 2018b:4-6).

35 For details on ADA's basic principles as well as cross-cutting issues and the HRBA see ADA 2018b:4-6.

PLANNING, PREPARING AND COMMISSIONING THE EVALUATION

STEP 1

Frame the evaluation interest and use

The process of framing the evaluation interest – or scoping an evaluation – is exploratory. It serves to define the overall direction before thinking of evaluation criteria or evaluation questions. It begins by asking whether an evaluation would be useful and feasible (see Chapter II, principle 1). This reflection lays the ground and sets the overall parameters for the evaluation process as well as the framing of the evaluation interest and use. It further helps reflect on what is really meaningful within the specific context of a programme or project: What do we really want to learn from the evaluation? And what not?

Evaluation managers and ADA programme and project managers are encouraged to deliberately exclude aspects that are of limited or no interest. This will help focus the evaluation and increase clarity, feasibility and ultimately, use. As a first step, it is therefore recommended to define what should be within and outside the scope of an evaluation – in a so-called scoping exercise jointly with key stakeholders and following a consultative process (see Annex 2). This thinking needs to be made explicit and further detailed later when developing the relevant sections of the Terms of Reference (see Step 6).

Aspects to consider when scoping an evaluation include:

• Geographical aspects: Which regions, countries, areas, districts, target communities should be part of the evaluation? Which ones not?

• Time-related aspects: What period do you want to consider for this evaluation? The current programme or project cycle, multiple cycles, or only a specific time period within a cycle?

• Thematic/structural aspects: Do you want to look at the entire programme or project or only at selected components?

• Evaluability aspects: Is enough data available and are key informants accessible to enable solid data collection and evidence generation?

The timing and timeliness of an evaluation are important elements to consider when framing the evaluation interest and use. In terms of timing, a broad distinction can be made between mid-term and end-term evaluations.36 Broadly speaking, mid-term evaluations intend to inform decision-making with regards to project or programme implementation to maximise the potential for achieving intended results. End-term evaluations are generally conducted to assess how and why results were achieved (or not), in order to inform decision-making with regards to programme and project continuation. Closely related to timing is the timeliness of an evaluation. When do evaluation findings and recommendations need to be available so that they can be put to effective use? This is largely determined by the needs of evaluation users and relevant decision-making processes. It is also important to consider: How quickly can findings be made available and communicated so that they arrive in time for decision-making?

36 Other types of evaluations that are typically implemented outside the framework of a specific programme or project or alongside its implementation are ex-ante/ex-post evaluations and real-time/developmental evaluations (see MFA 2019a).

KEY POINTS IN BRIEF

> Decide whether an evaluation interest exists, what it consists of and what use the evaluation will serve.

> Reflect on what is inside and outside the scope of an evaluation.

> Make sure the timing of an evaluation corresponds with the information needs of the intended users.

TIP

Make sure that cross-cutting issues and basic principles are considered during scoping.


STEP 2

Detail purpose and objectives

Every evaluation must have a clearly defined purpose to be of practical use. That is, it must meet the information needs of its intended users. If the purpose is unclear, there is a risk that the evaluation will focus on the wrong issues, draw invalid conclusions and provide recommendations that are neither useful nor used.

Evaluation in Austrian development cooperation performs three interconnected functions which serve as a guiding frame for defining the evaluation purpose:

1. A learning function to understand why particular development interventions have worked or not;

2. A steering function to supply credible and reliable findings for evidence-based decision-making at strategic and operational levels; and

3. An accountability and communication function to give account of the use of public funds and corresponding results achieved to partners, donors and the Austrian public at large.37

Even though in practice evaluations may serve multiple purposes, deciding on the main purpose is important in order not to lose the focus of the evaluation.38 To define the purpose of the evaluation, the evaluation manager is encouraged to start by asking the following questions: Why and for whom? Why is the evaluation being undertaken now? Who is asking for it? What are the intended benefits of the evaluation and for whom? If the evaluation mainly aims to meet an accountability purpose, it is important to define accountability to whom and for what. If the main purpose of the evaluation is geared towards learning, it is important to be clear about learning by whom and how this learning is supposed to happen.

The objectives logically follow from the purpose and provide more details on what the evaluation seeks to accomplish and how the results will be used to benefit the programme or project, other interventions or the organisation at large (see Box 2 for an example). Sometimes evaluation objectives are formulated using the OECD/DAC evaluation criteria (see Step 3). It is recommended to formulate between one and three objectives, yet there is flexibility in this number. Evaluation should be objective-driven rather than driven by methods or methodological considerations. Only when the objectives are clearly formulated does it become possible to determine the most suitable approach and methodology.

As the purpose and objectives are identified, it is important to specify the intended users of the evaluation in order to ensure that their information needs and expectations are met. Intended users may include ADA staff and/or implementing partners, coordinators or other staff at HQ and/or in the field, or stakeholders from similar programmes or projects. Their main interest from an evaluation may be to gain knowledge and insights into results achievement to help choose more effective implementation strategies. Likewise, decision-makers who oversee projects or programmes such as senior management, policy makers or donors very likely require evaluation findings to decide whether to continue, modify, or discontinue a programme or project. It is therefore critical to consult the intended users when defining the purpose and objectives of an evaluation.

37 See ADC 2019:5

38 That said, all evaluations, whether primarily for accountability, learning or steering, are learning opportunities.


KEY POINTS IN BRIEF

> Clarify the main purpose of the evaluation (learning, steering or accountability).

> The objectives logically follow from the purpose and provide more details on what the evaluation seeks to accomplish.

> Specify the intended users of the evaluation.

TIP

Make sure that the main purpose and objectives really reflect the needs and interests of the main users of the evaluation results.


STEP 3

Define key evaluation questions

As a next step, the evaluation manager translates the main purpose, objectives and scope of an evaluation into specific evaluation questions (EQ). The evaluation questions together drive the entire evaluation: They determine the evaluation design, including the methodological approach and the methods of data collection and analysis used, as well as the evaluation budget needed.

All evaluations of programmes and projects funded or implemented by ADA need to refer to and use the OECD/DAC evaluation criteria as the guiding frame for developing the evaluation questions (see Box 1). It is important to apply criteria thoughtfully and selectively, depending on the particular purpose and objectives of an evaluation (see Chapter II, principle 1). Data availability, budgets, timing and methodological considerations may also inform how (and whether) a particular evaluation criterion shall be covered.39

Box 1: Overview of the OECD/DAC evaluation criteria

KEY POINTS IN BRIEF

> Spend sufficient time on the thoughtful formulation of evaluation questions, using the OECD/DAC evaluation criteria as a guiding frame.

> Make sure that each evaluation question can be answered with the available time and resources and generate useful information to inform decision-making.

The OECD/DAC evaluation criteria40

The six OECD/DAC evaluation criteria – relevance, coherence, effectiveness, efficiency, impact and sustainability – each provide a different lens through which a programme or project can be viewed. Together they provide a comprehensive and holistic picture of a programme or project, the process of implementation (how change happens) and the results (what changed). The coherence criterion was added in December 2019 to better capture synergies, linkages, partnership dynamics, and complexity.41

To illustrate the content of each criterion and to better understand how to use the criteria when developing the evaluation questions, a simple question for each criterion can be kept in mind:

Relevance: Is the intervention doing the right things?
Coherence: How well does the intervention fit?
Effectiveness: Is the intervention achieving its objectives?
Efficiency: How well are resources used?
Impact: What difference is the intervention making?
Sustainability: Will the benefits last?

Evaluation questions should be clear and well-grounded in the purpose, objectives and scope of an evaluation. There is no standard rule as to the number of evaluation questions, which is highly context-specific. Yet there are trade-offs between the breadth and depth of an evaluation: The more evaluation questions, the less depth in analysis – and vice versa. When developing the evaluation questions, it is important to consider the feasibility for evaluation questions to be answered accurately within the scope, timeframe and budget of an evaluation as well as the data that are available and accessible.42 Can the questions be answered by drawing on more than one source of information in order to allow for triangulation? Careful consideration should also be given to involving relevant stakeholders in the formulation process. In addition to the intended users of an evaluation this may include the partner government or civil society. This will help to ensure that the information gained from each EQ is of high importance to stakeholders and generates interesting findings that are likely to be used.

Here is some practical guidance for developing good evaluation questions:

• Formulate open-ended questions43 that commonly start with to what extent, to what degree, how well or how (e.g. To what extent did the intervention have an effect on institutional change?)

• Use analytical (why?) rather than descriptive (what?) questions

• Use action-oriented questions, as they strongly focus on how findings will actually be used (e.g. How could we better support the inclusion of marginalised youth in Village Saving Groups?)

• Formulate questions with a single focus (not comprising several questions in one question)

• Avoid questions that already implicate part of the analysis and response

TIP

Rather than formulating specific evaluation questions for cross-cutting issues and basic principles, (implicitly) include them in every question.

39 OECD 2020:5

40 OECD 2020

41 OECD 2020:3

42 There is, however, general consensus among the development evaluation community not to formulate more than two to three questions per criterion, as having too many may result in the evaluation losing focus. See, for example, ALNAP (2016) or UNODC (2017).

STEP 4

Outline evaluation design and approach

Once the evaluation questions are developed, the evaluation manager defines the evaluation design and approach through which these questions will be answered. For the purpose of these guidelines, design is understood as the overall strategy chosen for assessing and analysing change, while approach is understood as the methodological approach, including the selection of data collection and analysis methods. There is no single right or best evaluation design or approach; the choice needs to be tailored to the specific evaluation purpose, objectives and questions. It is also important to consider the political and social context of the programme or project being evaluated as well as the available budget and timeframe. Identifying the best possible evaluation design and approach also requires balancing what is best and what is feasible.44 Finally, robust methods and data collection tools as well as triangulation are preconditions for obtaining solid and reliable data, which, in turn, are the basis for deriving credible and useful evaluation findings.45

At this point in the evaluation process it is neither necessary nor possible to develop a fully detailed evaluation methodology and set of data collection methods, as this will be done by the evaluator(s) during the inception phase (see Step 9). What is necessary at this point is to indicate aspects of design that are perceived as suitable and/or necessary for answering the evaluation questions. Which design will best help structure the data collection and analysis process for answering the evaluation questions and fulfil the purpose of the evaluation?

43 As opposed to closed-ended questions – those questions that can be answered by a simple 'yes' or 'no' – which are usually very limited in their scope and in the analysis and answer that they require.

44 UNODC 2017:130

45 Triangulation means using multiple approaches, methods and sources for data collection and analysis to verify and substantiate information. This helps overcome the bias that comes from single informants, methods, observations or perspectives. Validity refers to the accuracy and relevance of data, i.e. how accurately a method measures what it is intended to measure. Reliability refers to consistency in results using the same method, i.e. how consistently a method measures something (UNODC, 2017:132). The concepts of triangulation, validity and reliability are not specific to evaluations. They are concepts and quality standards pertaining to research. See for example Pierce (2008:79-99).

KEY POINTS IN BRIEF

> Reflect on which evaluation design and approach – in terms of overall strategy and methodological approach, including the selection of data collection and analysis methods – is best suited for the evaluation, taking into account its purpose, objectives and context as well as available resources and timeframe.

> Keep in mind the importance of triangulation of data, sources and methods to promote credibility and use of evaluation findings.

TIP

Ensure the suggested approach and methods are human rights based, gender-sensitive and inclusive.


In terms of evaluation design, a common distinction is made between experimental, quasi-experimental and non-experimental designs.46 They all aim to assess the causal links between the programme or project intervention and observed changes (causal attribution), but using different approaches. Experimental and quasi-experimental designs use a counterfactual approach47 to explain causality, while a non-experimental design identifies patterns that would be consistent with a causal relationship, usually grounded in a well-developed Theory of Change, and then seeks confirming and disconfirming evidence.

• Experimental design: Involves the random assignment of participants to an intervention (intervention group) or non-intervention (control group), pre- and post-measurement of each group and a comparison of the two.

• Quasi-experimental design: Uses a comparison group that is not randomly selected and attempts to take into account the challenges of doing a true experiment in real life.

• Non-experimental design: Considers the extent to which change has occurred only for those affected by the programme or project, without using a comparison between assisted and non-assisted groups.
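To make the counterfactual logic behind (quasi-)experimental designs concrete, the following minimal sketch in Python computes a simple difference-in-differences estimate from hypothetical baseline and endline scores for an intervention group and a comparison group. The data and variable names are illustrative assumptions, and the choice of a difference-in-differences calculation is one illustrative possibility, not a method prescribed by these guidelines.

```python
from statistics import mean

# Hypothetical baseline and endline scores (e.g. a household income index)
# for an intervention group and a non-randomly selected comparison group.
intervention_baseline = [52, 48, 55, 50, 49]
intervention_endline  = [61, 58, 66, 60, 59]
comparison_baseline   = [51, 47, 54, 49, 50]
comparison_endline    = [54, 50, 57, 52, 53]

# Change observed in each group between the two measurements.
change_intervention = mean(intervention_endline) - mean(intervention_baseline)
change_comparison   = mean(comparison_endline) - mean(comparison_baseline)

# Difference-in-differences: the change in the intervention group over and
# above the change in the comparison group, used here as a simple stand-in
# for the counterfactual (what would have happened without the intervention).
did_estimate = change_intervention - change_comparison

print(f"Change in intervention group: {change_intervention:.1f}")
print(f"Change in comparison group:   {change_comparison:.1f}")
print(f"Difference-in-differences:    {did_estimate:.1f}")
```

The change observed in the comparison group serves as a rough stand-in for what would have happened anyway; the remaining difference is the change plausibly attributable to the intervention.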

In terms of methodological approach, a common distinction is made between a qualitative, quantitative or mixed-methods approach, depending on the methods – that is, the tools, techniques and processes – used to collect and analyse data.48

• Quantitative approaches and methods: They measure and assess what can be studied with numbers. They answer the 'what' questions. Quantitative methods use structured approaches that provide precise data that can be statistically analysed.

• Qualitative approaches and methods: They analyse and explain what can be studied with words. Qualitative methods use semi-structured techniques to provide data that can offer an in-depth understanding of attitudes, perceptions and behaviours.

For the evaluation of programmes and projects funded or implemented by ADA, the use of a mix of methods is recommended to increase the variety of information and insights, and to allow for method and data triangulation in order to enhance the reliability and credibility of findings.

Again, the focus at this point is on identifying data collection methods that are considered the most realistic and useful in a particular evaluation context, rather than listing the entire range of possible methods. Detailed information as to which methods will be used for answering the evaluation questions will be developed by the external evaluator(s) during inception and illustrated in the evaluation matrix (see Step 9).

Table 1 provides an overview of some commonly used qualitative and quantitative data collection methods and techniques used in programme and project evaluations.49 Different options and methods for data analysis will be described in the context of data collection and analysis during inquiry (see Step 11).

46 UNODC 2017:130. As a general rule, experimental and quasi-experimental designs tend to be costlier and more time-intensive for preparation and implementation than non-experimental designs.

47 A counterfactual is a comparison between what has actually happened because of a project or programme, and what would have happened in its absence (see Rogers, 2014).

48 See for instance Cresswell (2014) and UNODC (2017:133).

49 Adapted from UNODC (2017:136-137). These methods refer to both primary and secondary types of data and are listed in alphabetical order and not in order of relevance.

TIP

Encourage the selection of a varied mix of methods to get a richer set of data.


Table 1: An overview of data collection methods and techniques

IV. THE EVALUATION PROCESS IN 15 STEPS

CASE STUDY: A detailed description of a limited number of observations (e.g. a community, project, time period, etc.). Case studies are particularly useful for evaluating complex situations and exploring qualitative impact.

DOCUMENT REVIEW: This includes secondary data sources for a better contextual understanding and/or collecting baseline data, as well as a review of internal and external documents.

FOCUS GROUP DISCUSSIONS (FGDs): A discussion undertaken with a small group of participants (preferably fewer than 12) to obtain perspectives and beliefs relevant to the issue being examined. In contrast to group interviews, the aim of FGDs is for participants to discuss and debate issues, with the facilitator taking the role of guide, observer and recorder.

INTERVIEWS: A standard method that can be conducted on an individual or group basis. The most common interview types are 1. structured interviews (following a predefined set of questions) and 2. semi-structured interviews (following flexible interview guidelines that allow for more in-depth responses to questions).

KEY INFORMANT INTERVIEWS: Conducted with people selected because they have specific or specialised information about a particular topic. Interviews typically follow an open-ended format.

MOST SIGNIFICANT CHANGE (MSC50): A participatory technique whereby participants are asked to describe the most important change that has happened from their perspective as a result of the project or programme. It is commonly used for assessing impact and can be applied when no baseline data or indicators exist.

OBSERVATION: Generally involves spending considerable time observing events, processes or people as they go about their typical activities, and recording these. A distinction can be made between participant observation (when the evaluator interacts as a participant) and non-participant observation (when the evaluator is purely an observer).

PARTICIPATORY RURAL APPRAISAL (PRA51): A toolbox that contains a wide range of simple methods and tools to engage communities in an evaluation and generate open discussion. The specific tools may be differentiated into space-related methods (e.g. social and resource maps, transects), time-related methods (e.g. timelines, trend analysis, seasonal diagrams, etc.) and relation methods (e.g. ranking, scoring, network diagrams, etc.).

SURVEY: A set of questions designed to systematically collect information from a defined population, usually by means of interviews or questionnaires administered to a sample of people representative of the target population. A survey can be self-administered, meaning that it is completed by the respondent, or enumerated, which requires a trained data collector for its administration.

50 MSC is not merely a data collection method, but rather a process that involves the collection of significant change stories emanating from the field level and the systematic selection and analysis of these (see Davies & Dart, 2005).

51 PRA is not merely a data collection method, but rather a process that involves data collection, analysis and conclusion by the community (see Kumar, 2002).


Deciding on an adequate mix of data collection methods and developing the tools and instruments needed for actual data collection is complex and time-consuming. For example, a good survey requires careful selection of survey participants and a balanced mix of closed-ended and open-ended questions. Similarly, interviews and focus group discussions require careful preparation of interview/discussion guides, taking into account the sequencing and wording of questions. This needs to be taken into account and budgeted for when designing an evaluation (see Step 5).

It is also important to allow sufficient time for data collection in the field. Oftentimes too many interviews and/or focus group discussions are scheduled during (often very condensed) field trips, ultimately compromising quality. Practical guidance52 and experience suggest the following (a simple capacity check is sketched after the list):

Key informant interviews:

  • Two weeks of fieldwork may include 25 to 50 key informant interviews

  • Typically, no more than four or five interviews per day

  • Theoretical saturation53 occurs at six to twelve interviews of a particular type

    Group interviews:

  • Two weeks of fieldwork may include five to twenty group interviews

  • Typically, no more than two or three per day

    Focus group discussions:

  • Two weeks of fieldwork normally include approximately ten focus group discussions

  • Typically, no more than one or two per day
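
As a rough illustration of these rules of thumb, the following sketch (not part of these guidelines; the daily caps and the example plan are assumptions taken from the upper bounds listed above) estimates the minimum number of field days a planned set of interviews and discussions would require, assuming each field day focuses on one type of activity:

```python
# Hypothetical planning aid: check a field mission plan against the rule-of-thumb daily caps above.
DAILY_CAPS = {
    "key_informant_interviews": 5,   # "no more than four or five interviews per day"
    "group_interviews": 3,           # "no more than two or three per day"
    "focus_group_discussions": 2,    # "no more than one or two per day"
}

def minimum_field_days(plan: dict) -> float:
    """Conservative estimate: assumes each field day is devoted to one type of activity."""
    return sum(count / DAILY_CAPS[activity] for activity, count in plan.items())

# Example plan, loosely similar to Box 2 below: 15 field interviews and one focus group discussion.
plan = {"key_informant_interviews": 15, "group_interviews": 0, "focus_group_discussions": 1}
print(f"At least {minimum_field_days(plan):.1f} field days needed")  # 3.5
```

If the estimate exceeds the days actually available in the field, either the schedule or the number of planned interviews and discussions should be revisited.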


52 See ALNAP (2016).

53 Saturation takes place when further interviews yield no more new information.


STEP 5

Estimate the budget

Determining an adequate evaluation budget depends on the purpose, objectives, scope, design and approach suggested for an evaluation. It also depends on the thematic and/or methodological evaluation expertise needed as well as the expectations with regards to field and other travel and related logistical arrangements throughout the evaluation process.

These guidelines recommend earmarking at least 3 percent of the programme or project budget for an evaluation. At the same time, a budgetary floor of 25,000 euro and a budgetary ceiling of 90,000 euro are recommended for the evaluation of a programme or project funded or implemented by ADA (see Chapter II, principle 4). This is in recognition of the fact that conducting an evaluation below a certain budgetary threshold is unrealistic and compromises quality.
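
One way to read this guidance is sketched below; this is an illustrative aid only (the function name and example budgets are assumptions, not part of the guidelines), applying the 3 percent rule within the recommended floor and ceiling:

```python
# Indicative evaluation budget: at least 3% of the programme/project budget,
# within the recommended floor (EUR 25,000) and ceiling (EUR 90,000).
def recommended_evaluation_budget(project_budget_eur: float) -> float:
    FLOOR, CEILING, SHARE = 25_000, 90_000, 0.03
    return min(max(SHARE * project_budget_eur, FLOOR), CEILING)

for budget in (510_000, 2_000_000, 5_000_000):
    print(f"{budget:>9,} EUR project -> {recommended_evaluation_budget(budget):,.0f} EUR evaluation")
# 510,000 -> 25,000 (floor applies); 2,000,000 -> 60,000; 5,000,000 -> 90,000 (ceiling applies)
```

The project in Box 2 below (510,000 euro) illustrates the lower end of this band: its estimated evaluation budget of roughly 26,000 euro sits just above the recommended floor.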

While a provisional budget estimate for an evaluation is usually earmarked within the respective programme or project budget, the evaluation manager needs to develop a detailed evaluation budget once a decision is made to evaluate. The single largest cost of any evaluation is the fees of the external evaluator(s). To ensure quality, it is crucial to have a realistic and adequate cost estimate. Box 2 provides guidance on how to calculate a detailed cost estimate for external evaluator(s) based on a concrete project example. This example is illustrative and the cost estimates provided need to be adjusted for each evaluation as necessary, depending on both the size and characteristics of the particular programme or project being evaluated (in terms of geographical location, language, safety etc.), and the purpose, objectives and scope of the evaluation as well as the choice of evaluation design and methods.

Besides budgeting for external evaluator(s), it is also important to account for the time investment needed by the evaluation manager in commissioning and managing the evaluation. Sufficient time (and budget, where relevant) should therefore be allocated for debriefings, engaging stakeholders and the evaluation reference group (as applicable) and for reading and commenting on the inception, draft and final evaluation reports.

Finally, the utilisation phase of an evaluation needs to be budgeted for – both in terms of financial and human resources. This may include the translation of the evaluation report or the development of evaluation briefs and other communication tools to promote the dissemination and use of evaluation findings. These costs need to be kept in mind and factored in by the evaluation manager when determining whether to conduct an evaluation and what resources to plan for. In case the evaluator(s) play a role in dissemination, this must be reflected in the ToR (Step 6). Developing a (costed) communication plan (see Step 14) is a good practice.

KEY POINTS IN BRIEF

> Ensure that the evaluation budget is based on a realistic estimate of the workload needed to conduct a solid evaluation within a given context.

> Allocate sufficient resources to evaluation design, data collection and analysis as well as (inception and evaluation) report writing and allow some flexibility by including a budget reserve.


Box 2: Example to illustrate steps 1 to 5

Project title: Suhareka/Suva Reka Smallholder Initiative – Local Development Fund54
Project budget: 510,000 euro
Implementation period: 24 months

Project background:

The overall objective of the project is to promote inclusive and sustainable economic development and job creation in the municipality of Suhareka. This will be achieved through the following specific objectives:

  1. Capacity development of existing farmer cooperatives and the Municipal Development Centre;

  2. Income diversification and resource strengthening of marginalised smallholders; and

  3. Strengthening of farmer associations in the areas of improved production, marketing and value chain optimisation.

The end-term evaluation aims at the following:

Purpose: To provide an assessment of the overall project progress and results against the objectives and indicators of achievement as mandated by the donor ADA and stipulated in the project document (Accountability-oriented focus).

Objectives:

  1. To determine the extent to which the household economies of marginalised smallholder farmers, and female-headed households in particular, have improved.

  2. To assess the individual and organisational skills development of farmer associations in the areas of production, marketing and value chain optimisation.

  3. To identify recommendations for future activities, with a particular focus on further economic skills development interventions.

Intended users:

Primary users: Project stakeholders, in particular the Local Development Fund (lead partner), Municipal Development Centre (local implementing partner), ADA and other co-donors.

Secondary users: Policy-makers and programme designers and implementers of other organisations that engage in smallholder strengthening through income generation.

Scope: The evaluation will cover activities that have taken place since the beginning of the project until the time of the evaluation.

Timing: The evaluation will take place between months 18-22 of project implementation (to allow sufficient time to develop needs-based future activities in the area of economic skills development).

Possible evaluation criteria and key evaluation questions:

Effectiveness:

1. To what extent was the project design, its objectives and expected results articulated in a coherent way?

2. To what extent has the project contributed to improving the household income of the 147 female-headed households in the municipality?

3. What is helping or hindering the farmer associations to optimise their production and marketing capacity?

Sustainability:

4. How well were the municipal bodies involved in the design, implementation and monitoring of the project?

5. To what degree do the five farmer associations demonstrate the technical capacity to participate in effective value chains, and to what extent are they able to draw benefits therefrom?

54 This project example is based on a real ADA-funded project.


Evaluation design and approach:

The evaluation follows a mixed-methods approach using a non-experimental design. The methods planned are a document review, 20 key informant interviews (of which 15 will take place in the field), one focus group discussion with ten women from female-headed households and participant observation at farmer association meetings.

Budget estimate for the evaluation consultancy:55

 

FEES FOR ONE EVALUATOR (€700/D)56
  Inception: 7 days (€4,900)
  Inquiry: 15 days (€10,500)
  Analysis & synthesis: 4 days (€2,800)
  Reporting: 7 days (€4,900)
  Reserve: 2 days (€1,400)

TRAVEL AND SUBSISTENCE ALLOWANCES: €1,200

MISCELLANEOUS: €500

TOTAL: €26,200

 

The largest cost position in the budget estimate for the conduct of an evaluation is the consultant fee for the evaluator(s). For this exemplary evaluation, one evaluator is needed for 35 working days. It is important to remember that working days do not mean calendar days: they do not include weekends and public holidays. If more than one evaluator is needed, the number of working days needs to be adapted accordingly and additional days for joint work and coordination must be calculated.

The above calculation is based on the following considerations:

For inception, seven working days are required for a kick-off meeting, an initial document review, developing the evaluation design (methodology and methods) and the drafting and finalisation of the inception report and related annexes. A document review, a total of 20 interviews and one focus group discussion are planned and need to be calculated into the costs pertaining to the implementation phase.

15 working days are calculated for doing inquiry (and related preparations), of which two for travel, eight for the 15 interviews, the focus group discussion and observations in the field, and five for the document review and five virtual key informant interviews. Next, for analysis and synthesis, a minimum of four working days is needed to process and analyse the interview data57 in order to derive findings, conclusions and recommendations. This includes half a working day for the presentation of preliminary findings. For drafting the evaluation report, seven working days are calculated – assuming approximately five pages per day of report writing and an additional day to account for the feedback and review process.

The cost calculation for travel and subsistence includes one regional flight (the ToR aim for an evaluator from the region), local transport as well as a per diem for accommodation and other subsistence costs incurred by the evaluator during the field mission. The cost position ‘miscellaneous’ includes costs for communication, copying/printing and a software licence (e.g. MAXQDA). Finally, two working days are calculated as reserve days.
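
For readers who want to adapt this example, the arithmetic behind the table above can be reproduced as follows; this is a minimal illustrative sketch (variable names are ours, not ADA's), using the figures from Box 2:

```python
# Reproduce the Box 2 consultancy budget: working days per phase times the daily fee,
# plus travel/subsistence and miscellaneous costs (net amounts, excluding VAT).
DAILY_FEE_EUR = 700  # illustrative daily fee, see footnote 56
DAYS = {"inception": 7, "inquiry": 15, "analysis_synthesis": 4, "reporting": 7, "reserve": 2}
TRAVEL_AND_SUBSISTENCE_EUR = 1_200
MISCELLANEOUS_EUR = 500  # communication, copying/printing, software licence

fees_eur = sum(days * DAILY_FEE_EUR for days in DAYS.values())
total_eur = fees_eur + TRAVEL_AND_SUBSISTENCE_EUR + MISCELLANEOUS_EUR

print(sum(DAYS.values()), "working days")  # 35 working days
print(f"{total_eur:,} EUR total (net)")    # 26,200 EUR total (net)
```

Adjusting the number of days, the daily fee or the travel costs immediately shows how the total responds, which can help when negotiating scope against the available budget.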

55 This calculation is based on net amounts. Value added tax (VAT), as applicable, has to be added when considering the total costs.

56 The daily fee displayed here is an example. Applicable fees vary and will be lower or higher based on a number of factors, including the local context, the scope and complexity of the evaluation as well as the evaluator’s expertise. Daily fees should always respect the principles and standards that govern ADA’s work. See ADA 2018b.

57 One of the most common approaches is qualitative content analysis (e.g. Mayring, 2014).


STEP 6

Develop Terms of Reference (ToR)

The Terms of Reference (ToR) of an evaluation bring together the conceptual thinking undertaken during all the previous steps in the evaluation process (Step 1 to Step 5). As such, they outline 1. Why the evaluation is being undertaken (purpose, objectives and users), 2. What is being examined and why now (scope and time), 3. The key criteria and evaluation questions being addressed and how these could be answered (methodology and methods), 4. The available budget and 5. The expected timeframe and deliverables. As with the earlier steps in the evaluation process, evaluation managers should follow a consultative process when developing the ToR.58 This is important to establish a shared understanding of the evaluation purpose and to clarify and manage expectations among relevant stakeholders. It also helps capitalise on existing knowledge and facilitates ownership of the evaluation process.

The ToR are a key reference document that will become part of the contractual agreement between the commissioning organisation and the external evaluator(s). They set out the overall framework and determine the general direction of an evaluation. As such, they serve as the key frame of reference for evaluator(s) when developing a proposal for conducting an evaluation. There are no standards with regards to the length of ToR. Yet the ToR need to be comprehensive and concise in spelling out the key parameters of an evaluation, including the expectations and requirements with regards to the evaluator(s)’ qualifications, the estimated timeframe and a budget range based on the calculations in Step 5. The ToR should also include a contact person for evaluator(s) to refer to for clarifications and questions and set a realistic timeframe for the submission of proposals. Annex 4 provides a checklist for evaluation managers and ADA programme and project managers to consider when developing and approving the ToR.

Similarly, there are no standards with regards to the size and composition of evaluation teams. While one evaluator may suffice for the evaluation of projects that are of smaller size and complexity, the evaluation of larger and more complex (e.g. multi-country) programmes may require two (or more) evaluators, one lead and one (or more) support evaluators. Especially for more complex evaluations, it is recommended that evaluation teams combine international and local evaluators. Having an appropriate gender mix within the evaluation team is also important, especially in cultures with strictly assigned male and female gender roles. It is important for evaluation managers to specify the requirements or preferences with regards to the size and composition of the evaluation team in the ToR, and to budget accordingly, so that evaluator(s) can take this into account when submitting a proposal.

58 This may be done through the establishment of an Evaluation Reference Group (ERG), which is generally composed of a small number of key stakeholders and intended users and which supports and provides inputs at key stages of the evaluation process, for example the evaluation design (scoping and ToR), the preparation of a stakeholder matrix, the draft inception report, the presentation of preliminary findings and the draft evaluation report (including the refinement of recommendations).

KEY POINTS IN BRIEF

> ToR provide the reference framework for an evaluation and for external evaluator(s) to develop proposals.

> Make sure that the requirements and expectations for the evaluation and the evaluator(s) are clearly stated and that the timeframe is realistic.


STEP 7

Select and commission evaluator(s)

The process of finding good and available evaluator(s) can be difficult and time-consuming. For evaluation managers to reach the best qualified evaluator(s) for conducting a particular programme or project evaluation, the ToR should be disseminated widely, targeting professional organisations and competent individuals.

There are several places to begin the search for (an) appropriate evaluator(s). One option would be to target evaluators and consulting firms that are already known to the commissioning organisation from previous assignments, or that were found via a targeted internet search. Other options for obtaining offers may include the advertisement of the ToR59 (e.g. in newspapers or magazines) and/or their publication on the Internet or via known e-mail distribution lists for networks of freelance evaluators and consulting firms. Moreover, many national, regional and international evaluation networks/societies60 are a good source for disseminating ToR, as many of these have a free newsletter function accessible to non-members as well.

Programme and project evaluations need to be commissioned following an open, transparent procurement process in keeping with the Federal Procurement Act61 or other applicable procurement law and relevant rules stipulated by ADA.62 The selection of the evaluator(s) is done on the basis of a technical and a financial (price) offer, which are assessed against the requirements set out in the ToR. For the assessment itself, it is recommended that the evaluation manager develops an assessment matrix, and that an assessment commission63 is established to ensure impartiality and objectivity in the selection process.

When planning an evaluation, it is important to allocate sufficient time for the procurement process, including for the advertisement of the Terms of Reference. Experienced and well-qualified evaluators typically have limited availability and must often be contracted months in advance. Evaluation managers are therefore encouraged to reach out to potential evaluator(s) and inquire about their interest and availability for submitting an offer soon after the decision to conduct an evaluation is made.

KEY POINTS IN BRIEF

> Circulate the ToR early and widely, using different communication channels and targeting qualified organisations and individuals.

> Commission the evaluator(s) in line with Austrian (or other applicable) procurement law.

TIP

LinkedIn is a great tool for targeting invitations (either to personal accounts or evaluation groups).

59 For the procurement of services above a certain budgetary threshold, the advertisement of ToR via specific channels and the minimum number of offers to be obtained are prescribed in the Federal Procurement Act (see BVergG 2018 as last amended).

60 Such as the European Evaluation Society (EES), the DeGEval Evaluation Society (DeGEval), the Swiss Evaluation Society (SEVAL), the Vienna Evaluation Network (VEN), and Voluntary Organizations for Professional Evaluation (VOPEs). They are a particularly useful platform for distributing ToR for country-specific evaluations.

61 BVergG 2018 as last amended.

62 See ADA’s General Terms and Conditions of Contracts.

63 For the procurement of services above a certain budgetary threshold, evaluation committees are prescribed by the Federal Procurement Act (see BVergG 2018 as last amended).


INCEPTION

STEP 8

Kick-off and clarification meeting

In most cases, the implementation phase begins with a kick-off and clarification meeting between the evaluation manager and the evaluator(s). The meeting, which can be held either in person or virtually, provides an opportunity for both parties to clarify the mandate and mutual expectations and to have a substantive discussion on how the evaluation will be carried out. It further serves to provide the evaluator(s) with background information on the programme or project being evaluated and a preliminary stakeholder mapping, and may also involve discussing administrative issues (e.g. invoicing). The meeting should further be used to share available documents and data and to clarify which additional information will be made available to the evaluator(s), how and by when. This will facilitate a subsequent review of documents and data, including of their quality, by the evaluator(s) and help prepare the evaluation matrix (see Step 9). For a checklist on what documents to share with the evaluator(s), please consult Annex 3.

It is important that the kick-off and clarification meeting takes place after the signing of the contract with the evaluator(s), as sensitive documents and data should not be handed over before the start of the contractual relationship. It is also advisable that the meeting is documented to ensure a common understanding of the next steps in the evaluation process.

STEP 9

The evaluation matrix

Developing an evaluation matrix is the first task undertaken by the evaluator(s) when developing the inception report and forms an integral part thereof. The matrix is a planning tool, which helps ensure that the evaluation will be able to address and answer all evaluation questions in a sufficiently robust manner. When developing the matrix, the evaluator(s) need to carefully review and refine the evaluation questions as stated in the ToR. They may also suggest reformulating, regrouping, reprioritising and sometimes even removing questions, as long as this is justified and agreed upon with the evaluation manager and the ADA programme and project manager.

The evaluation matrix should clearly show and map out how data will be collected against each evaluation question and how triangulation (see Step 4) between different data sources and methods will be accomplished. The evaluation matrix is also used as a basis for designing the various data collection tools and instruments, such as (semi-)structured interview guides. In terms of content, there is no single agreed format, yet it is recommended that an evaluation matrix contains at least the following elements (a simple machine-readable sketch of one matrix row follows the list):

  • Evaluation criteria

  • Evaluation questions

  • Indicators

  • Sources

  • Methods for data collection
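
The sketch below is a hypothetical illustration only (ADA's own template is in Annex 7); it shows how the five recommended elements of a matrix row could be captured in a small data structure and exported, for instance as an annex to the inception report. The field names and the example content, loosely based on Table 2 below, are assumptions:

```python
from dataclasses import dataclass, asdict
import csv

@dataclass
class MatrixRow:
    criterion: str   # evaluation criterion, e.g. "Relevance"
    question: str    # evaluation question
    indicators: str  # indicators (separated by "; ")
    sources: str     # data sources
    methods: str     # methods for data collection

rows = [
    MatrixRow(
        criterion="Relevance",
        question="To what extent do the interventions of the individual grant partners form a coherent programme?",
        indicators="Evidence of alignment of project activities with the overall donor programme",
        sources="Programme and project documentation; key informants incl. grant partners",
        methods="Systematic document review; semi-structured interviews with grant partners",
    ),
]

# Export the matrix so it can be shared and reviewed alongside the inception report.
with open("evaluation_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(rows[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(row) for row in rows)
```

Keeping the matrix in a structured form like this makes it easy to check that every evaluation question has at least one indicator, source and method attached before fieldwork starts.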

KEY POINTS IN BRIEF

> Use the meeting to provide context, clarify expectations and unclear issues, review the workplan and agree on next steps.

> Ensure the meeting takes place after (and not before) signing the contract with the evaluator(s).


KEY POINTS IN BRIEF

> The evaluation matrix sets out a plan for answering each evaluation question.

> Make sure that the evaluation matrix includes 1. evaluation criteria, 2. evaluation questions, 3. indicators, 4. sources and 5. methods for data collection.

TIP

Evaluators may reformulate, regroup, reprioritise and sometimes even remove evaluation questions as long as this is justified and agreed upon with the evaluation manager and the ADA PPM.


Please consult Annex 7 for a template of an evaluation matrix and refer to Table 2 for an illustrative example of how an evaluation matrix may be filled in:

Table 2: Example of an evaluation matrix (excerpt)

Evaluation criterion: Relevance

Evaluation question 1: To what extent do the interventions of the individual grant partners form a coherent child protection programme in Moldova?

Indicators:
  • Evidence of alignment of project activities with the overall donor programme
  • Evidence of internal coherence of the supported grants with the overall programme theory

Sources: Programme and project documentation (incl. programme models), policy documents; partner workplans, progress and performance reports; key informants (KI) incl. grant partners

Methods for data collection: Systematic document review; technical analysis and testing of strategies; semi-structured interviews with grant partners

Evaluation criterion: Impact

Evaluation question 2: To what extent has the programme and its grants helped to make a positive change in social service delivery for children?

Indicators: Contribution of the programme to the progressive realisation of children’s rights (none/modest/significant) with regards to:
  • Access to quality child protection and health services
  • Enabling environment to grow up in a safe, supportive environment
  • Alignment of child protection systems with UN Guidelines on Alternative Care

Sources: Programme and project documentation, policy documents, evaluations; programme beneficiaries (foster families, children); key informants (KI) incl. grant partners, government partners, CSO stakeholders

Methods for data collection: Systematic document review; observation of foster families; story-telling; semi-structured interviews with 1. grant partners, 2. government institutions dealing with children’s rights, and 3. CSO stakeholders

STEP 10

The inception report

The inception report (IR) is the first key deliverable of the evaluator(s). It serves as a roadmap for the evaluation and helps ensure a shared understanding between the evaluator(s), the evaluation manager and the ADA programme and project manager concerning workplan, deliverables and timeframes. Importantly, it further outlines the evaluation design and presents the data collection and analysis methods and tools to be used. The IR has yet another function: to identify potential risks and limitations along with adequate mitigation strategies. It is important to note that the evaluation approach presented in the IR may differ from the one set out in the ToR, as additional insights may have become available during inception. For example, a review of documents by the evaluator(s) may reveal that an evaluation question cannot be answered, or can only partly be answered, or that it needs rewording given the limited availability or quality of data. Similarly, the stakeholder mapping may uncover that certain tools such as focus group discussions are not feasible in a particular programme or project context, given security concerns or limited access to/by certain population groups.

It is important to allow sufficient time for preparing, reviewing and finalising the inception report, which needs to be approved by the ADA programme and project manager. The review process may encompass several rounds of feedback in order to meet the quality standards set by ADA. Please consult Annex 5 for a checklist on what should be included in the inception report and Annex 8 for a template of a feedback matrix that may be used during the review process. Only after the inception report is approved in writing by the ADA PPM can data collection begin, including any potential field missions.

KEY POINTS IN BRIEF

> Make sure that the inception report includes 1. a preliminary desk review summary; 2. an evaluation matrix; 3. a stakeholder mapping; and 4. a workplan.

> Allocate enough time for internal review and approval processes.

TIP:

Make sure the focus of the inception report is on the methodological part, not on context description.


INQUIRY

STEP 11

Data collection and analysis

Data collection

Data collection refers to the process of obtaining multiple types of data and information for evaluator(s) to be able to make an informed judgement about the programme or project being evaluated. It also entails organising and structuring the collected data, paving the way for data analysis. A rigorous evaluation process requires data to be collected from a variety of different stakeholders and sources, using different (qualitative and quantitative) data collection methods and tools. It also requires the triangulation of data, sources and methods (see Step 4) in order to contribute to obtaining valid and credible findings.

The evaluation manager’s primary role during data collection is to facilitate access to stakeholders. This entails striking a careful balance between providing the support needed by the evaluator(s) on the one hand, while maintaining the necessary distance to warrant independence on the other. Even if the evaluation manager is involved in organising meetings or visits, it is important that only the evaluator(s) or other members of the evaluation team (e.g. interpreters) participate(s) in data collection. For an evaluation to be credible, useful and subsequently used, it is paramount that the external evaluator(s) retain their independence and are seen to be independent throughout the evaluation process.

Data analysis

Data analysis refers to the process of transforming the collected data into findings, which in turn form the basis for deriving conclusions and recommendations. This step is sometimes neglected (probably because it is the least visible), often resulting in under-budgeting of the required consultant days.

Data analysis consists of two subsequent steps: 1. data processing and 2. data interpretation, as illustrated below (Figure 2).

Figure 2: The logical flow from data processing to data analysis

KEY POINTS IN BRIEF

> It is important to draw on multiple sources and to triangulate data in order to have valid evidence.

> Make sure data analysis is adequately budgeted for and documented in the inception report and evaluation report.

TIP:

Make sure that gender-sensitive language is used and that data collected is disaggregated by gender and other relevant dimensions.

DATA PROCESSING (purely descriptive). Example: “30% of all participants have been able to expand their business”

→ DATA INTERPRETATION (comparative assessment). Example: “This number is twice as high as last year.”


Data processing involves the structuring and cleaning of data as well as ensuring data accuracy and plausibility. This is a purely descriptive process, which may involve the use of statistical analysis tools (for quantitative data) or coding schemes (for qualitative data), the cleaning of data sets and the running of plausibility checks. Data interpretation, on the other hand, sets the collected data into a specific context. This is an analytical process involving the comparative assessment of data by the evaluator(s). Table 3 provides an overview of some of the most common data analysis methods.64
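
The distinction can be illustrated with a small sketch (the figures below are invented for illustration and are not from any ADA evaluation): the first print statement is pure processing of the collected data, the second interprets it against a comparison value.

```python
# 1 = participant reported expanding their business, 0 = did not (made-up monitoring data)
participants_this_year = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
share_this_year = sum(participants_this_year) / len(participants_this_year)

# Data processing (purely descriptive):
print(f"{share_this_year:.0%} of all participants have been able to expand their business")

# Data interpretation (comparative assessment), e.g. against last year's monitoring data:
share_last_year = 0.15  # assumed comparison value
print(f"This number is {share_this_year / share_last_year:.0f} times as high as last year")
```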

Table 3: Possible methods of data analysis

CONTENT ANALYSIS: A common approach to analysing qualitative data. The recoded data is reviewed and analysed for trends and patterns. The most common software programmes to facilitate such an analysis are NVivo, Atlas and MAXQDA (it can also be done in Excel or even by hand).65

CONTRIBUTION ANALYSIS: An approach used to assess the performance of programmes and projects by exploring cause and effect and studying their contribution to observed change. This includes verifying their underlying theory of change and, importantly, seeking out other factors that may have influenced results and should be taken into consideration when establishing the contribution made by a programme or project to observed results.

COST-BENEFIT ANALYSIS (CBA): An approach for assessing efficiency by calculating and comparing positive and negative consequences of an intervention in monetary terms. CBA assigns values to different items and uses methods to assess people’s willingness to pay for the benefits they will receive as the result of an intervention. It may be best used as part of a multi-criteria analysis.

COST-EFFECTIVENESS ANALYSIS: An approach for assessing whether results are being achieved at a reasonable cost. It typically considers the cost per unit of a service given or the cost per beneficiary. It is particularly useful when unit costs can be compared with other similar interventions.

MULTI-CRITERIA ANALYSIS: A set of methods that address cost and benefits of an intervention taking into account monetary values and non-monetary values relevant for a successful intervention (e.g. time savings, project sustainability and social and environmental impacts).

STATISTICAL ANALYSIS: A way of summarising and analysing quantitative data usually obtained from surveys. Descriptive statistics are used to understand characteristics of the sample studied (e.g. income range, average age, etc.), while inferential statistics are used for testing hypotheses and drawing conclusions about a larger population set. SPSS is one of the most commonly used statistical software packages.


64 Adapted from UNODC (2017:142-143). These methods are listed in alphabetical order and not in order of relevance.

65 See Mayring (2014) for a detailed description of qualitative content analysis.

SYNTHESIS


STEP 12

Findings, conclusions and recommendations

There needs to be a clear logical flow, or process of analysis, leading from the findings to the conclusions and recommendations of an evaluation. Findings should be backed by triangulated data and information, which requires the analysis of (qualitative and quantitative) data from different (primary and secondary) sources (see Step 11). Conclusions should derive from findings and reflect a shift in thinking and analysis from “what” to “so what”, based on the judgements and interpretations of the evaluator(s). Recommendations, meanwhile, should be based on conclusions and need to be clear, actionable and targeted to specific stakeholders in order to be useful and used. Figure 3 illustrates the logical flow from findings through to conclusions and recommendations.66

Figure 3: Logical flow from findings to conclusions and recommendations

KEY POINTS IN BRIEF

> Make sure there is a logical flow from findings to conclusions and recommendations.

> Findings must be based on triangulated data and information.

> Recommendations must be clear, actionable and targeted to specific stakeholders.

FINDINGS (empirical facts collected during the evaluation) → CONCLUSIONS (interpretations and judgements based on the findings) → RECOMMENDATIONS (proposed actions for management)

This step requires a lot of conceptual clarity and should be undertaken with great care and sufficient time. It is important that the logical flow from findings to conclusions and recommendations is evident and well documented in the evaluation report (see Step 13). It is also important that confidentiality is ensured and that evaluation findings cannot be traced back to individual sources. While there is no standard with regards to the number of recommendations, it is advised that recommendations are manageable, targeted to specific stakeholders and pitched at a sufficiently high level to allow room for formulating specific implementation measures when developing the management response (see Step 15). Recommendations should also be clear and actionable in order to be put to use.

66 Adapted from USAID (2010)


STEP 13

The evaluation report

The evaluation report is the most tangible deliverable in the evaluation process. It is always approved by the ADA PPM.

During this step, the evaluator(s) first prepare a draft evaluation report. The draft report should be well written and carefully presented, following the report structure outlined in Annex 6. It is reviewed by the evaluation manager, who should engage other stakeholders67 in the process, as relevant. The evaluation manager is encouraged to use a feedback matrix (see Annex 8) to systematically collect, document and share feedback with the evaluator(s) in a transparent manner. In terms of content, the feedback should primarily focus on assessing the factual correctness of statements, the logical flow and presentation of evidence (see Step 12) and the adherence to agreed standards68 and approaches set out in the inception report. The evaluator(s) are not required to incorporate all the feedback, as this may jeopardise their independent judgement. They do, however, need to correct factual errors and provide a justification when feedback is not taken on board.

In addition to the formal written feedback process outlined above, there are other consultation formats that help foster dialogue between the evaluator(s) and key stakeholders, validate findings and sharpen recommendations throughout the evaluation process. Table 4 provides an overview of the different consultation formats, their main purpose and their timing along the evaluation process – whether in person or virtually, in the field or at Headquarters.

KEY POINTS IN BRIEF

> Make sure that the report presents evidence-based findings, conclusions and recommendations.

> Use a feedback matrix when commenting on the draft report and leave it up to the evaluator(s) to decide whether the feedback is accepted or rejected (with justification).

TIP:

Allow sufficient time for the written feedback and set clear deadlines for commenting on the draft evaluation report.

Table 4: Different feedback formats and purposes

DEBRIEFINGS – Main purpose: make factual corrections and validate findings; ensure fairness and respect for interview partners/target groups. When: directly after data collection in the field.

PRESENTATION OF PRELIMINARY FINDINGS – Main purpose: ensure ownership, clarify points that may have been misunderstood, and provide an opportunity to arrange follow-up interviews as deemed necessary by the evaluator(s). When: after data analysis.

WORKSHOP TO REFINE/SHARPEN RECOMMENDATIONS – Main purpose: allow an opportunity for corrections/adjustments to sharpen recommendations and increase future use. When: when findings, conclusions and draft recommendations have been developed.

PRESENTATION OF DRAFT/FINAL REPORT – Main purpose: disseminate results, promote use and foster learning. When: after the approval of the final (draft) evaluation report.


67 E.g. ADA programme and project managers in the case of partner-led evaluations, thematic experts, evaluation advisors.

68 Including those set out in these guidelines and in the Evaluation Policy (ADC, 2019).


While the evaluation report also provides background and context to the programme or project being evaluated, its main purpose is to present evidence-based findings, conclusions and recommendations. These three elements form the core part of any evaluation report. At the same time, the credibility of the evaluation and its findings, conclusions and recommendations rests on the evaluation design and approach taken to answer the evaluation questions, the data collection and analysis methods used, and the measures taken to mitigate risks – all of which need to be properly documented and presented in the evaluation report.

An evaluation report must also include an executive summary, which will be read more widely and closely than any other part of the report. It is often the executive summary of an evaluation report that is published on the ADA website.69 The executive summary should therefore be developed as a stand-alone document that mirrors the structure of the evaluation report (see Annex 6). As such, it should not contain any new information. As in the report, emphasis should be placed on presenting the findings, conclusions and recommendations. While there is no standard length for an executive summary, it is recommended that it does not exceed 3 to 4 pages to ensure easy access and use by (often busy) stakeholders.

Every evaluation report needs to be submitted with a completed Results Assessment Form (RAF), which captures the degree of results achievement of a particular project and programme at different (output, outcome and, possibly, impact) levels. This form (see Annex 9) needs to be completed jointly by the evaluation manager and the ADA PPM (Part 1) and the evaluator(s) (Part 2) and must be submitted in Excel format to facilitate the subsequent analysis of RAFs. This will allow for broader conclusions to be drawn with regards to the overall effectiveness of programmes and projects funded or implemented by ADA.


TIP:

Allow time to write and review the executive summary. It likely will be the most widely read part of the report!

69 See ADA (2018a)


WORKING WITH THE FINDINGS

STEP 14

Disseminate evaluation findings

The timely availability of evaluation reports, along with the effective presentation70 and communication of findings, is a key requirement to promote their utilisation and use.71 In addition, the transparent publication of evaluation reports contributes to enhanced credibility. ADA’s Public Disclosure Policy requires that the executive summaries and RAFs of all evaluation reports of programmes and projects exceeding a certain threshold are made available via the ADA website.72

To ensure maximum outreach and use of evaluation findings, it is important to consider the intended users and uses of an evaluation early on. Evaluation managers are therefore encouraged to develop a communication plan identifying potential channels and products to best meet the information needs of different audiences at the beginning of the evaluation process (see Step 1). Different recommendations of an evaluation may speak to different stakeholders within (and outside of) an organisation, pointing up the need for targeted and tailored communication approaches.73 It is also important to factor in the costs associated with communicating evaluation findings and to make necessary arrangements accordingly (see Step 5), for example by costing the communication plan.

Guiding questions to facilitate effective dissemination of evaluation findings may include:

  • Why does this information need to be communicated?

  • What do the different audiences need to know? What would they like to know?

  • Are there any special considerations or limitations to be kept in mind (e.g. patchy internet connection, language, high staff turnover)?

  • When is the best timing for dissemination (e.g. upcoming strategy revision, new planning cycle)?

Evaluation findings need to be presented and communicated in a user-friendly and accessible manner to facilitate use. For example, traditional and text-heavy evaluation reports may not speak to their intended users. Some approaches for effectively communicating evaluation results may include story-telling elements and the use of visual aids such as diagrams, pictures, charts and graphs. A creative use of innovative information and communication technologies and social media channels (e.g. LinkedIn, Twitter, Facebook, blogs, Flickr.com74) is equally important. The website FreshSpectrum75 is a great resource for exploring user-friendly, innovative ideas for tailoring evaluation messages. Language is another factor that needs to be considered when disseminating evaluation findings.

70 See ADA (2018a)

71 MFA 2019:12

72 ADA (2018b)

73 See ALNAP (2016:341)

74 Flickr.com may be used for an evaluation photo story.

75 https://freshspectrum.com

KEY POINTS IN BRIEF

> Develop a communication plan early in the process, when identifying use and users of an evaluation.

> Ensure alignment with the ADA Public Disclosure Policy.69

TIP:

Know your audience and tailor the message.


STEP 15

Coordinate management response and follow-up

A management response (MR) is an effective tool to facilitate the utilisation of evaluation findings.76 It allows relevant stakeholders to position themselves vis-à-vis the evaluation and its recommendations and to articulate how they will go about taking them forward. The process of developing a management response can also foster organisational learning and accountability.

The role of the evaluation manager is to coordinate the development of the MR, bringing together the relevant stakeholders targeted by the recommendations of an evaluation. Recommendations may be accepted, partially accepted or rejected. In case of the latter two, a rationale needs to be provided to substantiate the decision. In the MR, stakeholders define specific measures, responsibilities and timeframes for implementing the recommendations (if accepted or partially accepted). A template for developing a management response to programme and project evaluations can be found in Annex 10.

It is important to allocate sufficient time for developing the management response and to start the process soon after completion of the evaluation report. The MR should be finalised within a period of three months. To help ensure timely implementation and use of recommendations, it is advised to regularly monitor and update the status of implementation of the management response77 and to define clear roles and responsibilities in that regard. The extent to which the recommendations of a programme or project evaluation are implemented may also be assessed as part of a future evaluation.

KEY POINTS IN BRIEF

> Developing a management response is a collaborative effort that is coordinated by the evaluation manager.

> Make sure the management response is developed within three months after completion of the evaluation report, and that its implementation is monitored on a regular basis.

76 MFA 2019:12

77 If not all recommendations can be implemented immediately, it might be useful to specify a prioritised time frame.


SUMMARY OF KEY OUTPUTS ALONG THE EVALUATION PROCESS

Figure 4 summarises the key evaluation outputs to be delivered in each phase of the evaluation process, be it by the evaluation manager (design and utilisation phase) or by the evaluator(s) (implementation phase). The ADA Programme and Project Manager approves all key outputs. Figure 4 also highlights relevant tools provided in these guidelines (see Annexes 2-10) to facilitate their development and quality assurance, for ease of reference and use.


Figure 4: Key outputs along the evaluation process

DESIGN – PLANNING, PREPARING AND COMMISSIONING THE EVALUATION
Steps: 1 Frame the evaluation interest and use; 2 Detail purpose and objectives; 3 Define key evaluation questions; 4 Outline evaluation design and approach; 5 Estimate the budget; 6 Develop Terms of Reference (ToR); 7 Select and commission the evaluator(s)
Key output: Terms of Reference
Useful tools: Scoping Exercise (Annex 2); Checklist of Documents for PP Evaluations (Annex 3); Quality Checklist for Terms of Reference (Annex 4)

IMPLEMENTATION
Inception: 8 Kick-off and clarification meeting; 9 The evaluation matrix; 10 The inception report
Inquiry: 11 Data collection and analysis
Synthesis: 12 Findings, conclusions and recommendations; 13 The evaluation report
Key outputs: Inception Report; Evaluation Report
Useful tools: Quality Checklist for Inception Report (Annex 5); Quality Checklist for Evaluation Report (Annex 6); Template for Evaluation Matrix (Annex 7); Template for Feedback Matrix (Annex 8); Results Assessment Form (Annex 9)

UTILISATION – WORKING WITH THE FINDINGS
Steps: 14 Disseminate evaluation findings; 15 Coordinate management response and follow-up
Key output: Management Response
Useful tools: Template for Management Response (Annex 10)


ANNEXES

ANNEX 1

Workflows for partner-led and ADA-led evaluations

The workflows below describe the process for partner-led and ADA-led evaluations. They are designed to visualise, and thus easily outline, roles and responsibilities along the evaluation process.

PARTNER-LED EVALUATIONS

WORK PLAN / PROGRAMME AND PROJECT (PP) PLANNING PHASE

ADA discusses within and across relevant organisational units whether an evaluation should be planned in relation to a particular proposed programme or project and states related expectations, including a reference to the ADA guidelines for PP evaluations, at tendering/procurement stage. (ADA PPM must consult with his/her organisational unit and can consult with ADA EVAL before taking a decision.)

If it is decided that no evaluation will take place, ADA PPM must document this, including the reasoning, in ADA’s Funding Management System (FMS). ADA PPM and IP discuss and document how learning will take place nevertheless during the process cycle. The IP documents this in the project document and budgets for it; ADA PPM documents this in the FMS. Ensure budgetary flexibility in case the need for an evaluation arises at a later stage.

If an evaluation is planned, ADA PPM ensures before contract finalisation that a preliminary scoping is done and that a first cost estimate and timing are in line with the standards set by the ADA guidelines for PP evaluations. The applicant IP includes this in the PP proposal; ADA PPM documents the decision after the PP is approved.

DESIGN PHASE

ADA PPM and IP hold a conference call/meeting to conduct scoping, define the evaluation’s purpose and objectives and identify key evaluation questions, relevant criteria and an adequate approach and methodology. (ADA PPM can ask ADA EVAL to join this discussion or meet them in preparation.)

IP drafts the ToR and submits them for comments and quality check to ADA PPM within the agreed deadline.

ADA PPM comments on the draft ToR, performs a quality check and returns them to IP for review and finalisation. (ADA PPM can, and in some cases where ADA’s Disclosure Policy applies must, include ADA EVAL in the quality check loop.)

IP finalises the ToR and sends them to ADA PPM for approval. ADA PPM reviews the ToR and either approves or rejects them; if rejected, an additional feedback loop follows.

IP disseminates the ToR and selects and commissions the evaluator(s).


IMPLEMENTATION PHASE

IP invites the evaluator(s) for a kick-off and clarification meeting/call. (IP and ADA PPM provide documents and key informant contacts.)

Evaluator(s) undertake(s) a desk review, preliminary interviews and analysis, draft(s) the Inception Report (IR) and submit(s) it for comments and quality check to IP within the agreed deadline.

IP comments on/quality checks the draft IR and submits it to ADA PPM.

ADA PPM comments on the draft IR, performs a quality check and returns it to IP for submission to the evaluator(s). (ADA PPM can, and in some cases where ADA’s Disclosure Policy applies must, include ADA EVAL in the quality check loop.)

Evaluator(s) finalise(s) the IR and send(s) it to IP for final review. IP then sends the IR to ADA PPM for final review and approval. ADA PPM reviews and either approves or rejects it; if rejected, an additional feedback loop follows.

Evaluator(s) conduct(s) inquiry, process(es) data, perform(s) analysis and synthesis and present(s) preliminary findings to IP and ADA PPM, if available. Feedback is provided, clarifications are made and next steps and deadlines are agreed. (IP and ADA PPM provide documents and key informant contacts.)

Evaluator(s) draft(s) the evaluation report and submit(s) it to IP. IP comments, provides a quality check and forwards it to ADA PPM for further comments and quality check.

ADA PPM comments on the draft report, performs a quality check (including other ADA stakeholders and ADA EVAL in the feedback loop) and returns it to IP for review and forwarding to the evaluator(s).

Evaluator(s) finalise(s) the evaluation report and send(s) it back to IP for final review. IP sends it back to ADA PPM for approval. ADA PPM reviews and either approves or rejects the evaluation report; if rejected, an additional feedback loop follows.

UTILISATION PHASE

IP disseminates and/or publishes the evaluation report and initiates the management response.

ADA PPM arranges for the publication of the executive summary of the evaluation report in line with ADA’s Disclosure Policy.


ADA-LED EVALUATIONS

WORK PLAN / PROGRAMME AND PROJECT (PP) PLANNING PHASE

ADA PPM discusses within and across relevant organisational units whether a particular proposed programme or project should be evaluated and takes a related decision in line with the ADA guidelines for PP evaluations. (ADA PPM must consult with his/her organisational unit and can consult with ADA EVAL before taking a decision.)

If it is decided that no evaluation will take place, ADA PPM must document this, including the reasoning. ADA PPM discusses and documents how learning will take place nevertheless during the process cycle, documents this in the project document and budgets for it, and documents the decision in the FMS. Ensure budgetary flexibility in case the need for an evaluation arises at a later stage.

If an evaluation is taking place, ADA PPM conducts a preliminary scoping and delineates a first cost estimate and timing in line with the standards set by the ADA guidelines for PP evaluations. ADA PPM reflects this decision in the relevant PP documentation.

DESIGN PHASE

The evaluation manager and ADA PPM conduct scoping, define the evaluation’s purpose and objectives and identify key evaluation questions, relevant criteria and an adequate approach and methodology. (ADA PPM can ask ADA EVAL to join this discussion or meet in preparation.)

The evaluation manager drafts the ToR and submits them for comments to relevant ADA colleagues at HQ or in the field.

The evaluation manager consolidates the ToR based on the comments received and performs a quality check. (ADA PPM can, and in some cases where ADA’s Disclosure Policy applies must, include ADA EVAL in the quality check loop.)

ADA PPM finalises the ToR and approves them. The evaluation manager disseminates the ToR and manages the tendering process.


IMPLEMENTATION PHASE

The evaluation manager invites the evaluator(s) for a kick-off and clarification meeting. (IP and ADA PPM provide documents and key informant contacts.)

Evaluator(s) undertake(s) a desk review, preliminary interviews and analysis, draft(s) the Inception Report (IR) and submit(s) it to the evaluation manager for comments within the agreed deadline.

The evaluation manager and ADA PPM comment on the draft IR and perform a quality check. The evaluation manager returns it to the evaluator(s) for finalisation. (ADA PPM can, and in some cases where ADA’s Disclosure Policy applies must, include ADA EVAL in the quality check loop.)

The evaluation manager reviews it and forwards it to ADA PPM for review and approval. ADA PPM reviews and either approves or rejects it; if rejected, an additional feedback loop follows.

Evaluator(s) conduct(s) inquiry, process(es) data, perform(s) analysis and synthesis and present(s) preliminary findings to the evaluation manager, ADA PPM and other ADA colleagues. Feedback is provided, clarifications are made and next steps and deadlines are agreed. (IP and ADA PPM provide documents and key informant contacts.)

Evaluator(s) draft(s) the evaluation report and submit(s) it to the evaluation manager.

The evaluation manager and ADA PPM provide a quality check. The evaluation manager returns it to the evaluator(s) for finalisation.

Evaluator(s) finalise(s) the evaluation report and send(s) it back to the evaluation manager for final review and approval. The evaluation manager reviews it and forwards it to ADA PPM for review and approval. ADA PPM reviews and either approves or rejects the evaluation report; if rejected, an additional feedback loop follows.

UTILISATION PHASE

ADA PPM disseminates the evaluation report and arranges for the publication of the executive summary of the evaluation report in line with ADA’s Disclosure Policy

ADA PPM coordinates the management response and follow-up


ANNEX 2 Scoping Exercise

The following questions78 are designed to support ADA programme and project managers and implementing partners in their first brainstorming with regards to the evaluation. The initial scoping exercise can take different forms and shapes – meetings, phone calls, workshops – and may be documented to help lay the ground for the elaboration of the Terms of Reference (ToR).

1. WHAT? (TOPIC)
Consider what is going to be evaluated:

> What is the essence of the programme or project that you want to evaluate?

> What is the context of the programme or project that you are looking at?

> Do you want to look at the whole programme or project or only at (a) selected component(s)?

> What do you want to exclude from the evaluation? Think of geographical, time-related, thematic, structural and/or other aspects that are of less interest or aspects that do not seem to be evaluable.

2. WHY? (PURPOSE)
Consider how the idea about the evaluation came up:

> Why do you want to do this evaluation? Why now? Who is asking for it?

> What kind of information would you like to obtain and which issues do you seek to address?

> How are the results of the evaluation going to be used and by whom? How can the evaluation benefit the programme or project, other interventions, the organisation?

> Are you interested in a specific evaluation product besides the evaluation report and the executive summary (e.g. two-page brief, infographic page, poster, video, podcast)?

3. WHO, WHAT, WHEN? (SCOPE)
Consider what should be included in and excluded from the scope of the evaluation:

> What period do you want the evaluation to cover? Do you want to look at the entire programme or project phase, multiple phases or a specific period therein (e.g. the last two years)?

> Who are the main stakeholders at different levels (HQ and field) and what should their involvement in the process be?

> What is the best timing for the evaluation to take place? Are there any specific deadlines or processes to consider for the final evaluation report to be available in time?

4. QUESTIONS? (GENERAL AND SPECIFIC)
Consider the issues that you want to learn about with regard to the programme or project:

> Feel free to think of as many questions as you need at this stage – no matter if very general or very specific. They will be fine-tuned at a later stage.

> Please share any working hypothesis or working assumption underlying your questions.


78 This scoping exercise is adapted from MSF (2019).


ANNEX 3 Checklist of Documents for Programme and Project Evaluations

D The following documents may be useful for external evaluator(s) when conducting an evaluation of programmes and projects funded or implemented by ADA. They need to be provided by the evaluation manager to the external evaluator(s) at the time of the kick-off. Some documents may not be readily available and need to be obtained from relevant stakeholders in advance. The checklist is not exhaustive and needs to be adapted and expanded for each evaluation on a case-by-case basis, as appropriate.

> ADA PP document/s, including annexes (logframe, budget, etc.) and revisions

> PP progress report/s

> PP final report

> ADA EGSIM-related documentation (manual, assessments, recommendations, etc.)

> ADA risk assessment-related documentation (manual, assessments, recommendations, etc.)

> Previous evaluations of the programme or project and related interventions (including earlier programme and project cycles)

> ADA monitoring data (reports, notes, etc.)

> ADA statistical data (markers etc.)

> ADA trip reports

> Etc.


ANNEX 4 Quality Checklist for Terms of Reference (ToR)

D This checklist79 is designed to support evaluation managers and ADA staff and implementing partners when preparing the Terms of Reference for an evaluation. It provides the basic structure of the ToR and serves as guidance for evaluation managers when reviewing and ADA programme and project managers when approving the ToR.

D Terms of Reference should be structured as follows:

1. Context and Background
2. Purpose and Objectives
3. Scope
4. Evaluation Questions
5. Design and Approach
6. Workplan
7. Evaluation Management Arrangements
8. Requirements for the Evaluator(s)
9. Specifications for the Submission of Offers
10. Annexes

For each section of the ToR, the checklist criteria are:

1. Context and Background
> The economic, social and political context in which the programme or project is being implemented and evaluated is described.
> The background of the programme or project being evaluated is described.
> Reference is made to the mandate for conducting the evaluation.

2. Purpose and Objectives
> The purpose of the evaluation is specified: why the evaluation is being conducted and why now.
> The primary users of the evaluation and the expected evaluation use are identified.
> The defined objective(s) are realistic, achievable and consistent with the evaluation purpose.

3. Scope
> The timeframe, programme or project phase, geographical area and thematic focus to be covered by the evaluation are defined.
> The OECD/DAC evaluation criteria and any additional criteria that may be of use to guide the evaluation (e.g. for evaluations of humanitarian response or normative programmes) are spelled out.
> The scope is feasible given available resources and time considerations.

4. Evaluation Questions
> A tailored set of evaluation questions directly related to the evaluation objective(s) and structured along the OECD/DAC evaluation criteria is defined.
> The evaluation questions are formulated concisely and clearly and allow evidence-based answers, taking into account the data that will be collected in the evaluation.

5. Design and Approach
> A clear description of the overall evaluation design, methodological approach and methods for data collection and analysis that may be used during the evaluation is included.
> The proposed design, methodological approach and data collection and analysis methods are adequate to answer the evaluation questions.
> It is spelled out how the human rights based approach (HRBA), ADC's cross-cutting issues, as well as the basic principles and quality standards applying to ADA's programme and project design should be incorporated in the evaluation design, approach and methods.80
> The data collection and analysis methods suggested are sufficiently rigorous to allow for a complete, fair and unbiased assessment.
> The need for deploying multiple methods, drawing on different sources and triangulating information is highlighted.
> It is specified that the evaluation will follow ADC and OECD/DAC norms and standards as well as ethical guidelines for evaluations (with reference to the relevant documents).

6. Workplan
> A description of the key evaluation phases along with relevant deliverables, estimated working days and timelines is included in the workplan.
> The quality assurance process, including provisions for written feedback from the evaluation manager, the reference group (where applicable) and other stakeholders, is factored in.

7. Evaluation Management Arrangements
> It is specified where evaluation management lies and whether a Reference Group will be established, along with the relevant roles and responsibilities.
> It is clarified that evaluation management needs to respect the ethical standards and guiding principles for evaluation, including impartiality and independence.

8. Requirements for the Evaluator(s)
> It is specified that the evaluator(s) must not have been involved in the design or implementation of the programme or project being evaluated.
> The level and nature of (i) required evaluation expertise and experience, (ii) thematic and/or geographical expertise and experience, and (iii) expertise and experience on the human rights based approach, gender responsive approaches and other areas of expertise relevant to the specific programme or project being evaluated are specified.
> A gender balanced and diverse team is part of the requirements in the case of offers involving more than one evaluator.
> The language skills required for the conduct of the evaluation are specified.

9. Specifications for the Submission of Offers
> It is specified that a technical and a financial offer need to be submitted, as well as the expected content and maximum length of each.
> The weight given to the assessment of the technical and the financial offer is specified (as a percentage).
> An estimated budget range for offers is included.
> A clear deadline (date/time/time zone) and a contact address for the submission of offers are included.

10. Annexes
> A reference to the Evaluation Policy and to the ADA Guidelines for Programme and Project Evaluations is included.
> A reference to key publicly accessible documents relevant to the programme or project being evaluated is included.

79 This quality checklist is adapted from the United Nations Evaluation Group (UNEG) Quality Checklist for Evaluation Terms of Reference and Inception Reports. UNEG (2010a).

80 This may include the participation of duty bearers and rights holders, especially women and vulnerable groups, and the documentation of how data collection will be human rights based, foster environmental sustainability, be gender sensitive and include the disaggregation of data by sex, ethnicity, age, disability, etc.


ANNEX 5 Quality Checklist for Inception Report (IR)

D This checklist81 is designed to provide guidance to evaluation managers and ADA programme and project managers when assessing and approving the inception report. It also serves as guidance for evaluator(s) when structuring the IR, to ensure that it meets ADA requirements.

D The inception report82 should be structured as follows:

1. Background, Purpose and Objectives
2. Evaluation Design and Approach
   2.1. Methodology and Methods
   2.2. Evaluation Matrix
   2.3. Data Collection Instruments
   2.4. Data Analysis
   2.5. Limitations, Risks and Mitigation Measures
3. Quality Assurance and Ethical Considerations
4. Workplan
5. Annexes

For each section of the inception report, the checklist criteria are:

1. Background, Purpose and Objectives
> The intervention logic of the programme or project being evaluated is depicted.
> The purpose, objective(s) and scope of the evaluation are stated and in line with the ToR.
> The primary users and the intended use of the evaluation are stated.

2. Evaluation Design and Approach

2.1. Methodology and Methods
> The methodological approach put forward in the IR is suitable to obtain reliable findings in line with the evaluation purpose, objective(s) and questions as per the ToR.
> The stated objectives are realistic and achievable given the information that can be collected in the context of the evaluation.
> The criteria and reference frameworks that evaluative judgements will be based upon are stated.
> Means for quality assurance and triangulation are outlined.
> Reference is made to how the selected methodology and methods will enable the application of ADA's basic principles and cross-cutting issues as well as the human rights based approach and other approaches, such as the conflict-sensitive approach, as relevant.

2.2. Evaluation Matrix
> The choice of indicators, sources and methods used to answer the evaluation questions, and the triangulation thereof, is presented and mapped against each evaluation question.

2.3. Data Collection Instruments
> Data collection instruments to be applied during the evaluation are outlined.
> The sequencing of data collection instruments is outlined and follows a logic.
> Relevant interview partners are identified and approximate numbers indicated.
> Key documents to be consulted are identified and approximate numbers indicated.
> Reasonable sampling strategies are developed for each data collection instrument.
> Tools (e.g. interview topic guides, questionnaires) are elaborated and annexed.

2.4. Data Analysis
> Data processing and interpretation are described.
> The data analysis plan and methods are comprehensive and clearly presented.

2.5. Limitations, Risks and Mitigation Measures
> All foreseeable limitations of the evaluation and the proposed methodology are highlighted and their implications for the evaluation are outlined.
> Appropriate measures to mitigate the risks are proposed.

3. Quality Assurance and Ethical Considerations
> Means to ensure the upholding of the Standards and Principles for Good Evaluations83 are specified.
> ADA's basic principles, its human rights based approach and its commitment to cross-cutting issues are adequately reflected in the evaluation design and approach, including the evaluation questions and data collection tools.
> Potential harms for participants of the evaluation and for the evaluator(s) are identified and mitigation measures defined.
> Approaches used to protect the confidentiality and anonymity of sources are outlined.

4. Workplan
> Timelines and deliverables throughout the evaluation process are presented in a workplan.
> Any changes or adaptations from the ToR agreed upon during inception are made explicit.

5. Annexes
> Data collection instruments, such as (semi-)structured interview guides and questionnaires.
> A comprehensive list of documents relevant for the evaluation.
> A comprehensive list of stakeholders.

81 This quality checklist is adapted from the United Nations Evaluation Group (UNEG) Quality Checklist for Evaluation Terms of Reference and Inception Reports. UNEG (2010a).

82 In addition to the specific chapters outlined below, the report should also include a title page, a table of contents, a list of tables/graphs and figures and a list of acronyms.

83 MFA 2019.


ANNEX 6 Quality Checklist for Evaluation Report (ER)

D This checklist84 is designed to provide guidance to evaluation managers and ADA programme and project managers when reviewing and approving the evaluation report. It also serves as guidance for evaluator(s) when structuring the evaluation report85, to ensure that it meets ADA requirements.

D The evaluation report should be structured as follows:

1. Executive Summary
2. Introduction
3. Background and Context Analysis
4. Evaluation Design and Approach
   4.1. Methodological Approach
   4.2. Data Collection and Analysis Tools
   4.3. Limitations, Risks and Mitigation Measures
5. Findings
6. Conclusions
7. Recommendations
8. Annexes

For each chapter of the evaluation report, the checklist criteria are:

1. Executive Summary
> Included as a stand-alone chapter in the evaluation report.
> Covers the chapters 2-7 outlined above.

2. Introduction
> The purpose of the evaluation is clearly defined, including why it is conducted at this point in time, who needs the information and how the information will be used.
> The objective(s) of the evaluation are stated.
> The scope of the evaluation is delineated.
> Reference is made to the quality standards and criteria applied.

3. Background and Context Analysis
> The context of key social, political, economic, demographic and institutional factors that have a direct bearing on the programme or project being evaluated is described.
> The scale and complexity of the programme or project being evaluated are presented, including its components, geographic boundaries, purpose, management and budget (from all sources).
> The key stakeholders involved in the design and implementation of the programme or project are mentioned, including implementing and other development partners, as well as their roles.
> The logic model, theory of change and/or expected results at different levels are described.
> The implementation status of the programme or project, including its phase and any significant changes that have occurred over time, and their implications for the evaluation are explained.

4. Evaluation Design and Approach

4.1. Methodological Approach
> The methodological approach, including literature references, is described and justified.
> A description of the stakeholder consultation process in the evaluation, including the rationale for selecting the particular level and activities for consultation, is included.
> An assessment of the design, implementation and monitoring of the programme or project being evaluated with a view to sound gender and human rights analysis, as well as actual results on gender equality, environmental sustainability, human rights and other fundamental principles of development cooperation through which cross-cutting issues are implemented, is included.
> A description of how the approach chosen reflects the basic principles underlying ADA's work as well as the human rights based approach and the commitment to cross-cutting issues is included.

4.2. Data Collection and Analysis Tools
> Data collection methods are described and the rationale behind their choice outlined.
> The sampling frame – areas and populations to be represented, selection criteria and mechanics, sample size and limitations – is described and relevant choices justified.
> A description of how the data collection methods and related processes employed reflect ADA's basic principles and commitments to human rights and cross-cutting issues is included.
> Measures taken to ensure data quality, including evidence supporting the reliability and validity of findings (e.g. interview protocols, survey design, observation tools), are described.
> A description of what type of triangulation (source, method, data, theory) was employed is included.

4.3. Limitations, Risks and Mitigation Measures
> Risks and limitations faced during the implementation of the evaluation are outlined, along with the strategies employed to mitigate them.
> Gaps and limitations in the evidence and/or unanticipated findings are reported and discussed.

5. Findings
> Relevance to the evaluation criteria and questions is ensured.
> Findings are based on evidence.
> Triangulation is done and documented in relation to each finding to ensure credibility.
> Findings are numbered and presented with clarity, logic and coherence.
> ADA principles and commitments with regard to human rights and cross-cutting issues are integrated in the findings.

6. Conclusions
> Reasonable evaluative judgements based on the findings and substantiated by the evidence presented are given and traceable.
> A logical connection to one or more evaluation findings is documented.
> Insights pertinent to the object and purpose of the evaluation and the knowledge interest of evaluation users are given.
> ADA's basic principles, the commitment to cross-cutting issues, the human rights based approach and other approaches, such as the conflict-sensitive approach, as relevant, are reflected in their formulation.

7. Recommendations
> A firm basis in evidence and conclusions is traceable.
> Relevance to the object and purpose of the evaluation is given.
> The target group for each recommendation is identified.
> The language is concise and clear; the content is actionable and reflective of an understanding of the commissioning organisation, the key intended users and potential constraints to follow-up.
> The number of recommendations is reasonable to allow for a manageable management response.
> Aspects related to equality and human rights are adequately reflected.

8. Annexes
> Presentation of evidence along the assessment grid per evaluation question
> Instruments for data collection
> List of interview partners (anonymised)
> Bibliography
> Evaluation ToR
> Additional annexes as deemed useful

84 This quality checklist is adapted from the United Nations Evaluation Group (UNEG) Quality Checklist for Evaluation Terms of Reference and Inception Reports. UNEG (2010a).

85 In addition to the specific chapters outlined below, the report should also include a title page, a table of contents, a list of tables/graphs and figures and a list of acronyms.


ANNEX 7 Template for Evaluation Matrix


D This template is designed to help structure and document how an evaluation will go about answering the questions. The evaluation matrix is instrumental for setting the scene for an adequate and realistic evaluation and forms an integral element of the inception report. The below template can be adapted and expanded by the external evaluator(s) as relevant, but must at least contain relevant information captured in the four columns below.

D The Evaluation Matrix template (in Excel format) can be downloaded from the ADA website.

EVALUATION QUESTION | INDICATORS | SOURCES | DATA COLLECTION METHODS

Evaluation questions are numbered (1, 2, 3, etc.) and grouped under the relevant evaluation criterion; for each question, the corresponding indicators, sources and data collection methods are entered in the remaining columns.
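If it is helpful to keep the matrix under version control outside Excel, the same four columns can also be held as simple structured data. The sketch below is purely illustrative and not part of the ADA template; the class name, field names and the example criterion, question, indicators, sources and methods are all invented placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class MatrixRow:
    """One row of an evaluation matrix: a question plus how it will be answered."""
    criterion: str                                        # evaluation criterion the question falls under
    question: str                                         # the evaluation question
    indicators: list[str] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)
    data_collection_methods: list[str] = field(default_factory=list)

# Hypothetical example row; the real content is defined per evaluation.
matrix = [
    MatrixRow(
        criterion="Effectiveness",
        question="To what extent have the planned outputs been achieved?",
        indicators=["Degree of achievement of logframe output indicators"],
        sources=["Progress reports", "Monitoring data", "Key informants"],
        data_collection_methods=["Document review", "Semi-structured interviews"],
    ),
]

# Print questions grouped by criterion, mirroring the layout of the template.
for row in matrix:
    print(f"{row.criterion}: {row.question}")
```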

   


ANNEX 8 Template for Feedback Matrix

D This template is designed to help the evaluation manager collect written feedback from relevant stakeholders involved in commenting on the draft inception report and the draft evaluation report of a programme or project evaluation. It also serves to document the evaluator(s)' reaction and justification should the feedback not be incorporated, and helps keep the review process and feedback loop transparent.

D The Feedback Matrix template (Excel format) can be downloaded from the ADA website.


Columns: COMMENT PROVIDED BY | TO REPORT SECTION | PAGE | ABOUT | COMMENT TEXT | REACTION EVALUATOR(S)
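Where the Excel template is not at hand, the same information can be captured in a plain CSV file. The sketch below is a minimal, non-authoritative illustration: the column parsing above, the field names, the file name feedback_matrix.csv and the example values are all assumptions of this sketch, not ADA specifications.

```python
import csv

# Column headings of the feedback matrix (field names are this sketch's own naming).
FIELDNAMES = [
    "comment_provided_by", "report_section", "page",
    "about", "comment_text", "reaction_evaluators",
]

# Hypothetical example entry documenting one comment and the evaluators' reaction.
rows = [{
    "comment_provided_by": "ADA PPM",
    "report_section": "Data Collection and Analysis Tools",
    "page": "17",
    "about": "Sampling strategy",
    "comment_text": "Please clarify how interview partners were selected.",
    "reaction_evaluators": "Accepted; selection criteria added.",
}]

# Write the matrix to a plain CSV file that can be shared alongside the draft report.
with open("feedback_matrix.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
    writer.writeheader()
    writer.writerows(rows)
```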

ANNEX 9 Results Assessment Form (RAF)


D The Results Assessment Form is a mandatory annex to the evaluation report of ADA PP evaluations. It serves to help ADA assess how integrated results based management is implemented at the level of programmes/projects and to assess the extent to which programmes and projects contribute to the achievement of results at different levels.86 Part 1 of the RAF needs to be filled in by the evaluation manager and the ADA PPM, while Part 2 needs to be filled in by the evaluator(s).

D The RAF template (Excel format) can be downloaded from the ADA website.

FOR THE EVALUATION MANAGER AND ADA PPM TO FILL IN (PART 1)

> PP Title:
> ADA PP Number:
> ADA Organisational Unit managing the PP:
> CRS Code/s:
> Country/Region of PP:
> Evaluation Manager:
> Project Budget:

FOR THE EVALUATOR(S) TO FILL IN (PART 2)

> Evaluation company/evaluator:
> Timing of evaluation:
> Completion date of evaluation (xx/xx/xxxx):

Assessment of results – key aspects

For each aspect below, the evaluator(s) choose(s) exactly one score* and justify(ies) the score, including the relevant finding and reference page/s in the evaluation report.

1. The extent to which the planned output/s (as defined in the project document/logframe/Theory of Change) has/have been achieved, taking into account the causal link between inputs and outputs.

2. The extent to which the planned outcome/s (as defined in the project document/logframe/Theory of Change) has/have been achieved, taking into account the causal link between outputs and outcomes.

3. The extent to which the PP contributed to the objectives at impact level (as defined in the project document/logframe/ToC).

4. The extent to which the outputs, outcomes and impact achieved contributed to results related to the relevant cross-cutting issues. Please add a justification for each relevant cross-cutting issue.

5. Have the right approaches – with a view to implementing ADA's overarching principles – been adopted to ensure results achievement?

* A drop-down list with the scoring scale is provided in the RAF template in Excel format available on the ADA website.

86 See ADA 2015:45.
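Part 2 of the RAF also lends itself to a small data record with exactly one score and one justification per aspect. The sketch below is an illustrative aid only: the class and field names, the example entry and in particular the ALLOWED_SCORES scale are invented; the authoritative scoring scale is the drop-down list in the Excel RAF template.

```python
from dataclasses import dataclass

# Illustrative scale only; the authoritative scale is the drop-down list in the Excel RAF template.
ALLOWED_SCORES = {"highly satisfactory", "satisfactory", "unsatisfactory", "highly unsatisfactory"}

@dataclass
class RafAspect:
    """One of the five assessed aspects in Part 2 of the RAF."""
    number: int          # 1-5
    description: str     # the aspect being assessed
    score: str           # exactly one score per aspect
    justification: str   # finding plus reference page/s in the evaluation report

    def validate(self) -> None:
        if self.score not in ALLOWED_SCORES:
            raise ValueError(f"Aspect {self.number}: score '{self.score}' is not on the scale")
        if not self.justification.strip():
            raise ValueError(f"Aspect {self.number}: a justification with page reference is required")

# Hypothetical entry for aspect 1 (achievement of planned outputs).
aspect_1 = RafAspect(
    number=1,
    description="Extent to which the planned outputs have been achieved",
    score="satisfactory",
    justification="Finding 3: four of five output targets met (evaluation report, pp. 21-23).",
)
aspect_1.validate()
```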


ANNEX 10 Template for Management Response (MR)

D This template is designed to help evaluation managers develop a management response and track its implementation. When a recommendation is not or only partially accepted, it is necessary to provide an explanation.

D The implementation status should be regularly monitored (at least every six months) and documented in the template. The evaluation manager is responsible for coordinating the development of the MR in consultation with the relevant stakeholders involved in the implementation of the recommendations and related measures defined in the MR.

D The MR template can be downloaded from the ADA website.


The template has the following columns:

(A) Recommendation of the evaluation (insert number and brief title)
(B) Recommendation fully accepted / partially accepted / not accepted (Yes/No for each option)
(C) Recommendation partially accepted or not accepted: please explain the reasons
(D) Recommendation fully or partially accepted: please define concrete measures to implement the recommendation
(E) Timeline for implementation
(F) Name of the organisation and department responsible for the implementation of the recommendation
(G) Current status of the implementation and date

One row (1., 2., 3., etc.) is filled in per recommendation, with Yes/No entered under (B).
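As with the other templates, the management response can also be tracked as structured records outside Excel. The sketch below is a non-authoritative illustration: the class and field names and the example recommendation are invented; the validation rules mirror the template description above (an explanation is required when a recommendation is not or only partially accepted, and concrete measures are required when it is fully or partially accepted).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ManagementResponseEntry:
    """One row of the management response: a recommendation and its follow-up."""
    recommendation: str                # (A) number and brief title
    acceptance: str                    # (B) "fully", "partially" or "not" accepted
    reasons: Optional[str] = None      # (C) required if not fully accepted
    measures: Optional[str] = None     # (D) required if fully or partially accepted
    timeline: Optional[str] = None     # (E) timeline for implementation
    responsible: Optional[str] = None  # (F) responsible organisation/department
    status: Optional[str] = None       # (G) current implementation status and date

    def validate(self) -> None:
        if self.acceptance not in {"fully", "partially", "not"}:
            raise ValueError("acceptance must be 'fully', 'partially' or 'not'")
        if self.acceptance in {"partially", "not"} and not self.reasons:
            raise ValueError("an explanation is required when a recommendation is not fully accepted")
        if self.acceptance in {"fully", "partially"} and not self.measures:
            raise ValueError("concrete measures are required for accepted recommendations")

# Hypothetical example entry; the implementation status is reviewed at least every six months.
entry = ManagementResponseEntry(
    recommendation="1. Strengthen outcome-level monitoring",
    acceptance="partially",
    reasons="Budget constraints limit additional data collection in the current year.",
    measures="Revise the monitoring plan for the remaining outcome indicators.",
    timeline="Within six months",
    responsible="Implementing partner, M&E unit",
    status="In progress",
)
entry.validate()
```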

ANNEX 11 Bibliography