Public Sector Operations

Good Practice Standards on Evaluation of Public Sector Operations

Background


Formulation Process

The GPS on evaluation of public sector operations, prepared by the ECG Public Sector Evaluation Working Group (WGPUB), were first adopted in 2002.[1]  This followed a series of workshops and analyses designed to take stock of current practices among the international financial institutions (IFIs) and identify good practices that should be promoted.[2]  In 2007 an effort was made to benchmark ECG members against these standards, but the consultant undertaking that work ultimately recommended that the GPS themselves needed to be updated before a meaningful benchmarking could be completed.[3]

The process of revising the 2002 GPS began with a stocktaking of current IFI practices in evaluation of public sector operations, conducted in 2010.[4]  At its November 2010 meeting in London, the ECG WGPUB reviewed and discussed the findings of the stocktaking paper to: (a) assess current practices of ECG members in relation to the current public sector evaluation GPS; and (b) identify key issues for consideration in the revision of the GPS, based on the comparison of current practices of the members and interviews with ECG member evaluation staff. 

The consensus of the meeting was that the public sector evaluation GPS should be revised, building on the stocktaking paper and the model of the ECG Private Sector Evaluation Working Group (WGPSE) GPS in identifying core principles that are the basis for harmonization for ECG members, while also spelling out options in terms of operating procedures that would allow some flexibility in implementation.  The work was guided by an advisory group with a representative of each member of the ECG WGPUB.

A draft revision of the ECG public sector evaluation GPS was circulated and subsequently discussed by the WGPUB at the March 2011 meeting of the ECG in Manila.  The meeting generated additional suggestions for improvements and the public sector evaluation GPS was updated in its third draft, dated May 2011.[5]  The question remained, however, as to whether the draft GPS as designed and articulated would be workable for benchmarking members' practices.  To this end, the Working Group commissioned a pilot benchmarking exercise that was undertaken at the World Bank and African Development Bank, and reported on at the November 2011 ECG meeting.[6]  Based on that exercise, members agreed on final revisions to these GPS.

Objectives and Organization

ECG's Good Practice Standards for the Evaluation of Public Sector Operations aim mainly to: (i) establish standards for the evaluation of IFI interventions that meet good evaluation practices generally accepted in the evaluation literature and backed by the experience of ECG members; and (ii) facilitate the comparison of evaluation results across ECG members, including the presentation of results in a common language.  The GPS also attempt to improve the identification and dissemination of best practices in evaluation, and to improve the sharing of lessons from evaluation. The standards are applicable to projects supported by IFI investment loans, technical assistance loans, and policy-based lending.  The GPS that define more effective linkages between independent evaluation and self-evaluation are presented in the Chapter on Self-Evaluation.

The goal of documenting these standards is to harmonize evaluation practice among the ECG members, not to evaluate their evaluation functions. ECG has developed a separate set of standards for the assessment of the evaluation functions of international financial institutions.[7]

These GPS are organized into three sections, dealing with report preparation and processes, evaluation approach and methodology, and dissemination and utilization. The Preparation and Processes section contains standards related to the planning, timing, coverage, selection, consultation, and review of evaluation reports. The Evaluation Approach and Methodology section contains standards relating to the objectives that form the basis of evaluations, as well as evaluation criteria and ratings. The Dissemination and Utilization section includes Central Evaluation Department (CED) reporting and disclosure standards.

Within each topic area, the GPS group the standards under a number of Evaluation Principles (EPs), which articulate the concept or purpose underlying the standards (the "what"). The EPs on public sector evaluation are composed of 7 standards and 27 elements. Each EP is supported by one or more "Operational Practices" (OPs) that describe the policies and procedures that would normally need to be adopted in order to be deemed consistent with the respective EP (the "how"). Unless otherwise noted, EPs and OPs apply to investment loans, technical assistance loans, and policy-based lending (PBL). A summary of the EPs and OPs is presented below:

Summary of Standards and Elements on EPs and Number of OPs on Evaluation of Public Sector Operations

A. Report Preparation and Processes

 1. Timing
    A. Performance Evaluation Reports (PERs): 2 OPs

 2. Coverage and Selection
    A. Accountability and Learning: 1 OP
    B. Sample Size: 2 OPs
    C. Additional Sample Projects: 1 OP
    D. Sampling Methodology: 3 OPs

 3. Consultation and Review
    A. Stakeholders' Consultation: 3 OPs
    B. Review: 3 OPs

B. Evaluation Approach and Methodology

 4. Basis of the Evaluation
    A. Objective-based: 8 OPs
    B. Project Objectives Used in Assessments: 1 OP
    C. Unanticipated Outcomes: 3 OPs
    D. Evaluations of PBLs: 2 OPs

 5. Criteria
    A. Scope of Evaluation: 2 OPs
    B. Relevance: 7 OPs
    C. Effectiveness: 3 OPs
    D. Intended Outcomes: 4 OPs
    E. Efficiency: 6 OPs
    F. Sustainability: 4 OPs
    G. IFI Performance: 2 OPs
    H. Borrower Performance: 2 OPs

 6. Ratings
    A. Criteria Rating: 2 OPs
    B. Rules: 2 OPs
    C. Aggregate Project Performance Indicator (APPI): 6 OPs

C. Dissemination and Utilization

 7. Dissemination and Utilization
    A. Synthesis Report: 5 OPs
    B. Accessibility of Evaluation Results: 3 OPs
    C. Disclosure: 2 OPs
    D. Dissemination: 1 OP
    E. Utilization of Recommendations: 3 OPs

Total no. of standards: 7.  Total no. of elements: 27.  Total no. of OPs: 83.

Source: GPS on Evaluation of Public Sector Operations, Revised Edition, 2012.

In addition to the EPs and OPs, these GPS on evaluation of public sector operations also provide the following documents in the last section of this Chapter: (i) a guide on benchmarking against the GPS (Annex III.1); (ii) a note on impact and impact evaluation in the GPS (Annex III.2); and (iii) Guidance Notes that provide detailed options in three areas:  attributing outcomes to the project, analyzing project efficiency, and special considerations for evaluating Policy-Based Lending (PBL) (Annexes III.3, III.4, and III.5).

 


[1]  ECG, Good Practice Standards for Evaluation of MDB Supported Public Sector Operations, 2002.  Standards covering policy-based lending later were added as an annex. 

[2]  Hans Wyss, Harmonization of Evaluation Criteria: Report on Five Workshops, prepared for the Evaluation Cooperation Group, Washington, DC, 1999; John Eriksson, Review of Good Practice and Processes for Evaluation of Public Sector Operations by MDBs, prepared for the Working Group on Evaluation Criteria and Ratings for Public Evaluation of the Evaluation Cooperation Group (ECG), Washington, DC, 2001.

[3]  V. V. Desai, "Benchmarking of MDB Evaluation Systems Against the GPS for Public Sector Operations," 2007.

[4] Kris Hallberg, Multilateral Development Bank Practices in Public Sector Evaluation. Final Report, March 3, 2011.

[5] Kris Hallberg, "Good Practice Standards for the Evaluation of Public Sector Operations: 2011 Revised Edition."  Third Draft, May 22, 2011.

[6] Patrick G. Grasso, "Benchmarking Pilot for the Draft Public Sector GPS," 2011. The pilot was based on a review of evaluation guidelines of the two IFIs, as opposed to reviewing performance.

Good Practice Standards on Evaluation of Public Sector Operations

Report Preparation and Processes

Evaluation Principle

(Standards and Elements)

Operational Practices

Notes

1. Timing

A. Performance Evaluation Reports (PERs): Subject to the constraints and specific needs of the Central Evaluation Department (CED), PERs are scheduled to ensure that sufficient time has elapsed for outcomes to be realized and for the sustainability of the operation to be apparent.

1.1   PERs are scheduled to ensure that sufficient time has elapsed for outcomes to be realized, recognizing that outcomes higher in the results chain may take more time to materialize. PERs may be conducted before project closing if needed to inform the design of subsequent operations or to provide case studies for higher-level evaluations, but if this is done, the project is not rated.

 

1.2    Policy-Based Loans (PBLs) in a series are evaluated at the end of the series.

Relevant for IFIs that provide PBLs.

2. Coverage and Selection

A.  Accountability and Learning: The CED has a strategy for its mix of evaluation products that balances the two evaluation functions of accountability and learning.

2.1    The mix of completion report (CR) validations and PERs reflects the need for both accountability and learning, taking into account the quality of the international financial institution's (IFI) CRs, the CED's budget, and the size of the population of projects ready for evaluation.

CEDs may differ in the relative emphasis they place on the two functions (accountability and learning).

B. Sample Size of Projects: For purposes of corporate reporting (accountability), the CED chooses a sample of projects for a combination of CR validations and PERs such that the sample is representative of the population of projects ready for evaluation.

 

 

3.1    The sample size for a combination of CR validations and PERs is sufficiently large to ensure that sampling errors in reported success rates (effectiveness ratings or aggregate project performance indicator [APPI] ratings) at the institutional level are within commonly accepted statistical ranges, taking into account the size of the population of operations ready for evaluation.

 

 

3.2    If the sample for CR validations and PERs is less than 100% of the population of CRs and projects ready for evaluation, a statistically representative sample is selected.  If the annual sample has too large a sampling error or the population is too small to yield reasonable estimates, the results from multiple years can be combined to improve the precision of the results.

A stratified random sample may be chosen.  Examples of strata are regions, sectors, and types of operations.
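The statistical considerations behind these sampling practices can be illustrated with a minimal sketch (a hypothetical Python example, not part of the GPS; the function name and figures are illustrative): it computes the margin of error of a reported success rate from a simple random sample, with a finite population correction for the typically small population of projects ready for evaluation.

```python
import math

def success_rate_margin(n_sampled: int, population: int, success_rate: float,
                        z: float = 1.96) -> float:
    """Approximate 95% margin of error for a success rate estimated from a
    simple random sample of n_sampled projects drawn from `population`
    projects ready for evaluation, with finite population correction."""
    p = success_rate
    se = math.sqrt(p * (1 - p) / n_sampled)          # standard error of a proportion
    fpc = math.sqrt((population - n_sampled) / (population - 1))  # finite population correction
    return z * se * fpc

# Illustration: 60 of 150 eligible projects sampled, 75% rated successful
margin = success_rate_margin(60, 150, 0.75)
```

Doubling the sample (or pooling multiple years, as OP 3.2 suggests) shrinks the margin, which is how a CED can judge whether reported trends are within "commonly accepted statistical ranges".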

C. Additional Sample Projects: If an additional purposive sample of projects is selected for learning purposes, it is not used by itself for corporate reporting.

 

 

4.1    In cases where an additional purposive sample of projects is selected for PERs independent from a statistically representative sample used for corporate reporting, the PER ratings are not included in aggregate indicators of corporate performance.

Relevant for IFIs that choose an additional purposive sample of projects for evaluation.  Examples of selection criteria are:  potential to yield important lessons; potential for planned or ongoing country, sector, thematic, or corporate evaluations; to verify CR validation ratings; and areas of special interest to the Board.

D. Sampling Methodology:  The sampling methodology and significance of trends are reported.

5.1    The CR validation sample and the PER sample are set in the CED's annual work program. Ratios and selection criteria are clearly stated.

 

5.2    In corporate reporting the confidence intervals and sampling errors are reported.

 

5.3    The significance of changes in aggregate project performance and how to interpret trends are reported.

 

3. Consultation and Review

A. Stakeholders' Consultation: Stakeholders are consulted in the preparation of evaluations.

6.1    PERs are prepared in consultation with the IFI's operational and functional departments. The criteria for selecting projects for PERs are made transparent to the stakeholders.

 

6.2    As part of the field work for PERs, the CED consults a variety of stakeholders. These may include borrowers, executing agencies, beneficiaries, NGOs, other donors, and (if applicable) co-financiers.

 

6.3    The CED invites comments from the Borrower on draft PERs. Their comments are taken into account when finalizing the report.

 

B. Review: Draft evaluations are reviewed to ensure quality and usefulness.

7.1    To improve the quality of PERs, draft PERs are peer reviewed using reviewers inside and/or outside the CED.

 

7.2    To ensure factual accuracy and the application of lessons learned, draft PERs are submitted for IFI Management comments.

 

7.3    To ensure factual accuracy and the application of lessons learned, draft CR validations are submitted for IFI Management comments.

 

 

Good Practice Standards on Evaluation of Public Sector Operations

Evaluation Approach and Methodology

Evaluation Principle

(Standards and Elements)

Operational Practices

Notes

4. Basis of Evaluation

A. Objective-based: Evaluations are primarily objectives-based.

8.1    Projects are evaluated against the outcomes that the project intended to achieve, as contained in the project's statement of objectives.

International financial institutions (IFIs) may choose to add an assessment of the achievement of broad economic and social goals (called "impacts" by some IFIs) that are not part of the project's statement of objectives. If such a criterion is assessed, it is not included in the calculation of the aggregate project performance indicator (APPI) (i.e., it falls "below the line"). See also evaluation principle (EP) #3C and operational practices (OP) # 22.1 and # 22.2.

8.2    Broader economic and social goals that are not included in the project's statement of objectives are not considered in the assessment of Effectiveness, Efficiency, and Sustainability. However, the relevance of project objectives to these broader goals is included as part of the Relevance assessment.

 

8.3    The project's statement of objectives provides the intended outcomes that are the focus of the evaluation. The statement of objectives is taken from the project document approved by the Board (the appraisal document or the legal document).

 

8.4    If the objectives statement is unclear about the intended outcomes, the evaluator retrospectively constructs a statement of outcome-oriented objectives using the project's results chain, performance indicators and targets, and other information including country strategies and interviews with government officials and IFI staff.

 

8.5    The focus of the evaluation is on the achievement of intended outcomes rather than outputs. If the objectives statement is expressed solely in terms of outputs, the evaluator retrospectively constructs an outcome-oriented statement of objectives based on the anticipated benefits and beneficiaries of the project, project components, key performance indicators, and/or other elements of project design.

Intended outcomes are called "impacts" by some IFIs.

 

Evaluations of countercyclical operations also focus on the achievement of outcomes. The intended outcomes may need to be constructed from sources of information other than the project documents, including interview evidence from government officials and IFI staff.

8.6    If the evaluator reconstructs the statement of outcome-oriented objectives, before proceeding with the evaluation the evaluator consults with Operations on the statement of objectives that will serve as the basis for the evaluation.

 

8.7    The anticipated links between the project's activities, outputs, and intended outcomes are summarized in the project's results chain. The results chain is taken from the project design documents. If the results chain is absent or poorly defined, the evaluator constructs a retrospective results chain from the project's objectives, components, and key performance indicators.

Intended outcomes are called "impacts" by some IFIs.

 

8.8    Policy-based lending (PBL) evaluations focus on the program of policy and institutional actions supported by the PBL, and the resulting changes in macroeconomic, social, environmental, and human development outcomes. The PBL's intended outcomes are taken from the program's statement of objectives and results chain.

Relevant for IFIs that provide PBLs.

B. Project Objectives used in Assessments: If project objectives were revised during implementation, the project is assessed against both the original and the revised objectives.

9.1    If project objectives and/or outcome targets were changed during implementation and the changes were approved by the Board, these changes are taken into account in the assessment of the core criteria. The central evaluation department (CED) defines a method for weighting the achievement of the original and revised objectives in order to determine the assessment of the core criteria.

The CED may apply the same method to projects with changes in objectives and/or outcome targets that were not approved by the Board. The evaluator may need to judge whether such changes were valid.

 

Options for weighting include (i) using the original and revised objectives by the share of disbursements before and after the restructuring; (ii) weighting by the share of implementation time under each set of objectives; and (iii) weighting by the undisbursed balances on the loan before and after restructuring.
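Weighting option (i) can be sketched as follows (a hypothetical Python illustration; the function name and the ratings and disbursement figures are invented for the example, and the GPS do not prescribe a particular formula): the achievement ratings against the original and revised objectives are combined using the share of disbursements before and after restructuring.

```python
def weighted_rating(rating_original: float, rating_revised: float,
                    disbursed_before: float, disbursed_after: float) -> float:
    """Combine achievement ratings against original vs. revised objectives,
    weighted by the share of loan disbursements before and after
    restructuring (weighting option (i)). Ratings are numeric, e.g. 1-4."""
    total = disbursed_before + disbursed_after
    w_before = disbursed_before / total
    return w_before * rating_original + (1 - w_before) * rating_revised

# Illustration: 40m disbursed before restructuring, 60m after;
# rating 2 against the original objectives, 4 against the revised ones
overall = weighted_rating(2, 4, 40.0, 60.0)  # 0.4*2 + 0.6*4 = 3.2
```

Options (ii) and (iii) follow the same pattern with implementation time or undisbursed balances substituted for the disbursement shares.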

C. Unanticipated outcomes: The evaluation includes consideration of unanticipated outcomes.

10.1     Unanticipated outcomes are taken into account only if they are properly documented, are of sufficient magnitude to be consequential, and can be plausibly attributed to the project.

Unanticipated outcomes are called "unanticipated impacts" by some IFIs.

 

Unanticipated (or unintended) outcomes are defined as positive and/or negative effects of the project that are not mentioned in the project's statement of objectives or in project design documents.

 

Excluding consideration of unanticipated outcomes in the Effectiveness and Sustainability assessments ensures the accountability of the project for effective and sustainable achievement of its relevant objectives.

10.2     Unanticipated outcomes are taken into account in the assessment of Efficiency. The calculation of the project's ex post ERR includes unanticipated positive outcomes (by raising benefits) and unanticipated negative outcomes (by raising costs). The assessment of the project's cost-effectiveness includes unanticipated negative outcomes (by raising the costs of achieving the project's objectives).

 

10.3     Unanticipated outcomes, both positive and negative, are discussed and documented in a separate section of the evaluation.

 

D. Evaluation of PBLs: Evaluations of PBLs assess the performance of the reform program as a whole.

11.1     Evaluations of a programmatic series of PBLs assess the performance of the entire program (the series) in addition to assessing and rating the individual operations in the series.

Relevant for IFIs that provide PBLs.

11.2     PBL evaluations assess the results of the overall program, regardless of the sources of financing.

Relevant for IFIs that provide PBLs.

5. Criteria

A. Scope of Evaluation: Evaluations encompass all performance attributes and dimensions that bear on the operation's success.

 

12.1     Investment and technical assistance operations are assessed according to a minimum of six criteria: four core criteria related to project performance (Relevance, Effectiveness, Efficiency, and Sustainability) along with IFI Performance and Borrower Performance. This applies to both completion report (CR) validations and performance evaluation reports (PER).

Definitions of the criteria are given in the Glossary of Terms.

 

IFIs may choose to assess additional criteria such as the quality of the CR, the quality of the project's monitoring and evaluation framework, social impacts, environmental impacts, institutional development impact, etc.

 

12.2   PBLs are assessed according to a minimum of five criteria: three core criteria related to project performance (Relevance, Effectiveness, and Sustainability) along with IFI Performance and Borrower Performance. This applies to both CR validations and PERs.

Relevant for IFIs that provide PBLs.

 

IFIs may choose to assess additional criteria such as the quality of the CR, the quality of the project's monitoring and evaluation framework, social impacts, environmental impacts, institutional development impact, etc.

B. Relevance: The assessment of Relevance covers both the relevance of objectives and the relevance of project design to achieve those objectives.

13.1   The relevance of objectives is assessed against beneficiary needs, the country's development or policy priorities and strategy, and the IFI's assistance strategy and corporate goals. Projects dealing with global public goods also assess relevance against global priorities.

For further guidance on assessing Relevance for PBLs, see Guidance Note 3 (Annex III.5).

13.2   The assessment also considers the extent to which the project's objectives are clearly stated and focused on outcomes as opposed to outputs.

 

13.3     The realism of intended outcomes in the country's current circumstances also is assessed.

 

13.4   The relevance of design assesses the extent to which project design adopted the appropriate solutions to the identified problems. It is an assessment of the internal logic of the operation (the results chain) and the validity of underlying assumptions.

 

13.5     The assessment also considers the relevance of modifications to project design.

 

13.6     Whether the project's financial instrument was appropriate to meet project objectives and country needs also is assessed.

 

13.7     The relevance of objectives and design is assessed against circumstances prevailing at the time of the evaluation.

 

C. Effectiveness: The assessment of Effectiveness evaluates the extent to which the project achieved (or is expected to achieve) its stated objectives, taking into account their relative importance.

14.1     The assessment of Effectiveness tests the validity of the anticipated links between the project's activities, outputs, and intended outcomes (the results chain).

Intended outcomes are called "impacts" by some IFIs.

14.2   Both the actual and the expected results of an operation are included in the assessment of Effectiveness.

 

14.3   In evaluations of PBLs, achievement of outcomes is measured against development objectives; prior actions taken and triggers met do not by themselves provide sufficient evidence of the achievement of outcomes.

Relevant for IFIs that provide PBLs.

D. Intended Outcomes: Subject to information and CED resource constraints, the assessment of Effectiveness uses appropriate methods to determine the contribution of the project to intended outcomes in a causal manner.

15.1     Outcomes are evaluated against a counterfactual. When feasible and practical, evaluations use a combination of theory-based evaluation and impact evaluation. If an impact evaluation is not feasible or practical, evaluators at a minimum use a theory-based approach, and discuss factors other than the project that plausibly could have affected outcomes.

Intended outcomes are called "impacts" by some IFIs. Other IFIs include causality in the definition of "impact".

 

See Guidance Note 1 (Annex III.3) for a menu of quantitative and qualitative approaches to attributing outcomes to the project.

15.2     In rare cases where there are no other plausible explanations of the change in an outcome indicator other than the project, a "before-and-after" evaluation method is sufficient. In these cases, the evaluator presents the arguments why outside factors were unlikely to have affected outcomes.

 

15.3     In CR validations, the method used to construct a counterfactual depends on the quality of evidence in the CR. At a minimum, the evaluator uses a theory-based approach to validate the CR's conclusions regarding the links between project activities, outputs, and outcomes. Other non-project factors that plausibly could have contributed to observed outcomes are discussed.

 

15.4     PBL evaluations attempt to separate the effects of the program supported by the PBL from the effects of other factors.

Relevant for IFIs that provide PBLs.

See also Guidance Note 3 (Annex III.5).

E. Efficiency: The Efficiency assessment attempts to answer two questions: (i) Did the benefits of the project (achieved or expected to be achieved) exceed project costs; and (ii) Were the benefits of the project achieved at least cost?

16.1     To address the first question (Did the benefits of the project, achieved or expected to be achieved, exceed project costs?), cost-benefit analysis is carried out to the extent that data is available and it is reasonable to place a monetary value on benefits. An ERR higher than the opportunity cost of capital indicates that the project was a worthwhile use of public resources. Therefore, when an ERR is calculated, it would normally need to be greater than the opportunity cost of capital for a fully satisfactory assessment of Efficiency. Other thresholds may be used, varying for example by sector, but if so they are explicitly defined by the CED.

See Guidance Note 2 (Annex III.4) for further detail on options for assessing Efficiency.

Note that Efficiency is assessed for investment and technical assistance (TA) loans but not for PBLs (see OPs # 12.1 and # 12.2).

 

16.2     The methodology and assumptions underlying the calculation of an economic rate of return (ERR) or net present value (NPV) are clearly explained and transparent. Ex post estimates are compared with the ex ante estimates in the project documents.

Relevant when ERRs/NPVs are estimated.

16.3   Sensitivity tests on ERRs based on possible changes in key assumptions are carried out. These assumptions reflect any concerns in the assessment of Sustainability.

Relevant when ERRs/NPVs are estimated.
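The cost-benefit test underlying these OPs can be sketched as follows (a hypothetical Python illustration, assuming annual net benefit flows; it is not a prescribed GPS method): the ERR is the discount rate at which the NPV of the project's net benefit stream is zero, found here by bisection, and is then compared with the opportunity cost of capital.

```python
def npv(rate: float, flows: list[float]) -> float:
    """Net present value of annual net benefit flows; flows[0] is year 0."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def err(flows: list[float], lo: float = 0.0, hi: float = 1.0,
        tol: float = 1e-6) -> float:
    """Economic rate of return: the rate at which NPV = 0, by bisection.
    Assumes a conventional flow pattern (costs first, benefits later),
    so NPV declines as the discount rate rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustration: costs of 100 in year 0, net benefits of 30 for five years
flows = [-100, 30, 30, 30, 30, 30]
rate = err(flows)  # compare with the opportunity cost of capital
```

Sensitivity tests (OP 16.3) amount to recomputing the ERR after perturbing the benefit or cost assumptions and checking whether it still exceeds the chosen threshold.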

16.4     To address the second question (Were the benefits of the project achieved at least cost?), cost-effectiveness analysis is carried out. The analysis considers the cost of alternative ways to achieve project objectives, unit costs for comparable activities, sector or industry standards, and/or other available evidence of the efficient use of project resources.

 

 

16.5     In addition to the traditional measures of efficiency (cost-benefit analysis and cost-effectiveness analysis), the Efficiency assessment considers aspects of project design and implementation that either contributed to or reduced efficiency. For example, implementation delays (to the extent they are not already captured in the evaluation's cost-benefit or cost-effectiveness analysis) would have an additional negative impact on Efficiency.

 

16.6     For evaluations of TA operations, if project design includes a pricing policy or pricing guidelines for TA, the Efficiency assessment considers the degree to which these policies were implemented.

Relevant for IFIs that provide lending for TA.

F. Sustainability: The assessment of Sustainability is based on the risk that changes may occur that are detrimental to the continued benefits associated with the achievement or expected achievement of the project's objectives, and the impact on that stream of benefits if some or all of these changes were to materialize.

17.1     The Sustainability assessment considers several aspects of sustainability, as applicable: technical, financial, economic, social, political, and environmental. It also considers the degree of government ownership of and commitment to the project's objectives; the ownership of other stakeholders (e.g., the private sector and civil society); and the degree of institutional support and the quality of governance. The risk and potential impact of natural resource and other disasters is also considered.

 

17.2     Sustainability is determined by an assessment of both the probability and likely impact of various threats to outcomes, taking into account how these have been mitigated in the project's design or by actions taken during implementation. The evaluator takes into account the operational, sector, and country context in projecting how risks may affect outcomes.

 

17.3     The Sustainability assessment refers to the sustainability of intended outcomes that were achieved or partially achieved up to the time of the evaluation, as well as intended outcomes that were not achieved by the time of the evaluation but that might be achieved in the future. To avoid overlap with Effectiveness, Sustainability is not downgraded based on incomplete achievement of objectives per se.

Intended outcomes are called "impacts" by some IFIs.

17.4     The time frame for the sustainability assessment depends on the type of project being evaluated, but is clearly stated in the evaluation. For investment operations, the time frame for the Sustainability assessment is the anticipated economic life of the project. For PBLs, the time frame may need to be longer to encompass the persistence of results from policy and institutional actions adopted under the operation. For some types of investment projects, the starting point of the sustainability analysis may not be the time of the evaluation, but rather the start of operation of the project.

For PBLs, see also Guidance Note 3 (Annex III.5).

G. IFI Performance: The assessment of IFI Performance covers the quality of services provided by the IFI during all project phases.

 

18.1     The assessment of IFI Performance at project entry covers the IFI's role in ensuring project quality and in ensuring that effective arrangements were made for satisfactory implementation and future operation of the project. This includes:

-      the quality of the analysis conducted to identify problems and possible solutions;

-      the consideration of alternative responses to identified problems;

-      the degree of participation of key stakeholders;

-      the use of lessons learned from previous operations;

-      the quality of risk analysis and the adequacy of proposed risk mitigation measures;

-      the adequacy of institutional arrangements for project implementation;

-      the identification of safeguards relevant to the project; and

-      the IFI's efforts to ensure the quality of the monitoring and evaluation framework.

 

 

18.2     The assessment of IFI performance during project supervision is based on the extent to which the IFI proactively identified and resolved problems at different stages of the project cycle, including:

-         modifying project objectives and/or design as necessary to respond to changing circumstances;

-         enforcing safeguard and fiduciary requirements; and

-         ensuring that the monitoring and evaluation system was implemented.

 

H. Borrower Performance: Borrower Performance assesses the adequacy of the Borrower's assumption of ownership and responsibility during all project phases.

19.1   The assessment of Borrower Performance focuses on processes that underlie the Borrower's effectiveness in discharging its responsibilities as the owner of a project, including:

-         government and implementing agency performance in ensuring quality preparation and implementation;

-         compliance with covenants, agreements, and safeguards;

-         provision of timely counterpart funding; and

-         measures taken by the Borrower to establish the basis for project sustainability, particularly by fostering participation by the project's stakeholders.


19.2     The assessment covers the performance of the government as well as the performance of implementing agencies.

 

6. Ratings

A. Criteria Rating: Each of the six criteria (five for PBLs) is assigned a rating.

20.1     For Relevance, Effectiveness, Efficiency, and Sustainability, the criterion is rated on the degree of achievement, for example from "negligible" to "high". Normally a four-point rating scale is used. Ratings may be either categories or numbers.

Additionally, ratings of "non-evaluable" and "not applicable" may be used.

20.2     For IFI Performance and Borrower Performance, the number of rating scale points is either four or six. The ratings measure degrees of satisfactory or unsatisfactory performance, for example ranging from "Highly Successful" to "Highly Unsuccessful". The rating scale is symmetric. Ratings may be either categories or numbers.

Additionally, ratings of "non-evaluable" and "not applicable" may be used.

B. Rules: Rules for assigning criteria ratings are clearly spelled out.

21.1     If the rating for a given criterion is constructed from ratings on sub-criteria or from ratings on different elements of the criterion, the rules for the aggregation are clearly spelled out in evaluation guidelines.

For example: (i) the Relevance rating may be based on separate ratings for the relevance of objectives, design quality and preparation, institutional arrangements, and the relevance of modifications; (ii) the Effectiveness rating may be based on separate ratings for the achievement of each of the project objectives; (iii) the Efficiency rating may be based on separate ratings for overall economic and financial performance, cost-effectiveness, and timeliness of outputs and outcomes.

21.2     In evaluation reports, evaluators provide a justification for each rating.

 

C. APPI: An Aggregate Project Performance Indicator (APPI) is constructed from the core criteria.

22.1     For investment and TA loans, the APPI is constructed from the four core criteria: Relevance, Effectiveness, Efficiency, and Sustainability.

If additional (non-core) criteria are included in the evaluation (see Note to OP # 12.1 above), their ratings are not used in the calculation of the APPI (i.e., they are "below the line").

 

A second aggregate indicator, including these additional criteria, may be constructed.

22.2     For PBLs, the APPI is constructed from the three core criteria: Relevance, Effectiveness, and Sustainability.

Relevant for IFIs that provide PBLs.

22.3     In constructing the APPI, the component criteria are normally given equal weights. The relative ratings of the core criteria are reviewed for logical consistency. If there are inconsistencies, the evaluator may choose to assign unequal weights to the component criteria, explaining the reasons behind them.

For example, it would be unusual for an ineffective project to receive a high rating on Sustainability. Similarly, it would be unusual for a project to be rated highly successful if its sustainability was in doubt or its relevance was poor at project completion.

22.4     If criteria ratings are given numerical values, the rules for constructing the APPI rating category (e.g., by rounding or by using threshold values) are clearly spelled out in evaluation guidelines.
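The GPS leave the choice of scale, labels, and aggregation rules to each IFI's evaluation guidelines, so the following Python sketch is purely illustrative: it assumes a four-point numeric scale (1 = "negligible" to 4 = "high", with "modest" and "substantial" as placeholder intermediate labels), equal weights as in OP 22.3, and round-half-up as the OP 22.4 rule for mapping the numeric average back to a rating category.

```python
# Illustrative sketch only: scale, labels, weights, and rounding rule are
# assumptions, not prescribed by the GPS. Assumes a four-point numeric
# scale where 1 = negligible and 4 = high; the intermediate labels are
# placeholders.

RATING_CATEGORIES = {1: "negligible", 2: "modest", 3: "substantial", 4: "high"}

def appi(relevance, effectiveness, efficiency, sustainability):
    """Aggregate Project Performance Indicator for an investment/TA loan,
    built from the four core criteria with equal weights (OP 22.1, 22.3)."""
    scores = [relevance, effectiveness, efficiency, sustainability]
    average = sum(scores) / len(scores)
    # OP 22.4: the rule for converting a numeric average into a rating
    # category (here, round-half-up) must be spelled out in guidelines.
    category = RATING_CATEGORIES[int(average + 0.5)]
    return average, category

avg, cat = appi(relevance=4, effectiveness=3, efficiency=3, sustainability=2)
# avg = 3.0, cat = "substantial"
```

For PBLs, the same construction would use only the three core criteria of OP 22.5 (Relevance, Effectiveness, Sustainability); unequal weights, as allowed under OP 22.3, would replace the simple average with a weighted one.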

 

22.5     For the APPI, the number of rating scale points is either four or six. The rating scale is symmetric. Ratings may be either categories or numbers.

 

22.6     If, in addition to the APPI, a second aggregate indicator is calculated, the component criteria and rules for constructing the second indicator are clearly spelled out in evaluation guidelines. Both the APPI and the second aggregate indicator are presented in corporate reports.

Relevant for IFIs that construct a second aggregate indicator.

 

Good Practice Standards on Evaluation of Public Sector Operations

Dissemination and Utilization


7. Dissemination and Utilization

A. Synthesis Report: The central evaluation department (CED) prepares a periodic synthesis report.

23.1    At least every three years, the CED prepares a periodic synthesis report addressed to the IFI's Management, staff, and Board. The frequency of reporting depends on the significance of changes in aggregate ratings and recommendations from year to year.

 

23.2     The review includes a synthesis of completion report (CR) validations and performance evaluation reports (PER) produced during the period covered. The criteria and rating systems used in the evaluations are clearly spelled out. All ratings reported are those from the CED; differences in aggregate ratings between CR validations/PERs and the CRs are disclosed.

 

23.3     The CED reports periodically (at least every three years) to the international financial institution's (IFI) Board of Directors and Management on the quality of the IFI's self-evaluation system, including the application of lessons in new operations.

 

23.4     The CED's synthesis ratings are included in integrated corporate performance reporting.

 

23.5     Since the aggregate project performance indicators (APPI) for investment/technical assistance (TA) loans and policy-based loans (PBL) are based on different criteria and thus are not strictly comparable, they are reported separately in corporate performance reporting.

 

B. Accessibility of Evaluation Findings: The CED makes evaluation findings and lessons easily available to IFI staff.

24.1     The CED makes available to all IFI staff a range of user-friendly dissemination products covering all of its evaluation products along with the periodic synthesis report.

 

24.2     The CED relies primarily on its intranet website for document posting and notifies staff of new items through the corporate website.

 

24.3     The CED maintains a searchable lessons-learned system to assist Operations staff to find lessons applicable to new projects. The entries include specific lessons along with contextual material to allow the lessons to be readily applied.

 

C. Disclosure: Within the guidelines of the IFI's overall disclosure policy, the CED discloses all evaluation products.

25.1     The CED's disclosure policy for evaluation products is explicit and consistent with the IFI's general disclosure policy.

 

25.2     The CED discloses the full report of all of its evaluation products. Only in exceptional cases is some measure of confidentiality warranted. In these cases, if possible, evaluation reports are redacted and then disclosed.

Examples of exceptional cases would be (i) an evaluation of an operation with a semi-public/semi-private entity, for which the relevant disclosure standard may be that of the Private Sector GPS; and (ii) an evaluation of a PBL for which the disclosure of evaluation results would be likely to seriously compromise the process of policy change.

D. Dissemination: The CED pro-actively reaches the public with its evaluation results.

26.1     The CED has a strategy for disseminating its evaluation products according to the types of products it produces and the audiences it intends to reach: IFI staff, member governments, other client stakeholders, civil society organizations, academia, and others.

Options include evaluation summaries; inclusion of evaluation findings in IFI annual reports; hosting conferences, training sessions, and public consultations on evaluation methods and findings; and the use of websites, public media, and social media.

E. Utilization of Evaluation Recommendations: The CED follows up on IFI Management's implementation of recommendations made by the CED.

27.1     Based on its PERs and higher-level evaluations, the CED makes recommendations to IFI Management and the Board to improve the IFI's effectiveness. These include a specific, time-bound set of actions for IFI Management that can reasonably be completed in the short term and monitored by IFI Management and the CED.

 

27.2     The CED maintains a tracking system for recording and following up on steps taken to respond to each recommendation that was endorsed by IFI Management.

 

27.3     The CED reports to the Board on IFI Management follow-up to its recommendations.