Monitoring and Evaluation (M&E) is a key component of CFIC operations. While monitoring is the routine tracking of project activities for efficiency, progress, and performance, evaluation is an x-ray of project performance and impact at key points in the implementation process. Our unique strength in M&E derives from our prowess in using both qualitative and quantitative methods and our versatile experience conducting M&E in several countries in Africa, Asia, and Latin America.

MONITORING
- Performance Indicators
- Logical Framework Analysis
- Data Quality Assessment (DQA)
- District Health Information System (DHIS)
- Project Management
An important component of project monitoring is the development of indicators used to measure project performance. Performance indicators are formulated largely from project objectives, and they should be SMART (Specific, Measurable, Achievable, Realistic, and Time-specific). While some indicators are reported daily based on routine data, others may be reported less often, depending on how frequently the underlying events occur and on their importance or sensitivity to project implementation.
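Tracking an indicator against its target can be sketched in a few lines of code. The sketch below is illustrative only; the indicator name, target, and figures are hypothetical, not drawn from any CFIC project.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    # A hypothetical SMART indicator: a name, a numeric target,
    # and a reporting frequency (e.g. "daily", "weekly", "quarterly").
    name: str
    target: float
    frequency: str

def achievement_rate(indicator: Indicator, actual: float) -> float:
    # Percentage of the indicator's target achieved so far.
    return 100.0 * actual / indicator.target

clinics = Indicator("Facilities reporting weekly service data",
                    target=120, frequency="weekly")
print(f"{achievement_rate(clinics, 96):.1f}% of target achieved")  # 80.0% of target achieved
```

Reporting frequency is carried on the indicator itself, reflecting the point above that some indicators are reported daily while others are reported less often.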
The logical framework is a diagrammatic representation of the logical connections between inputs, outputs, and outcomes, together with indicators, means of verification, and assumptions. In essence, the logical framework answers the following questions: what are we trying to accomplish and why (goals, purpose, and outcomes); how do we measure success during implementation (performance indicators and verification); what other conditions must exist (assumptions); and how do we get there (inputs). The logical framework thus summarises a project work plan and the conditions for implementation. At CFIC we design logical frameworks and advise our clients on a routine basis about implementation strategies and risk diagnostics.
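The elements of a logical framework can also be captured as structured data, which makes the goal-to-input chain easy to review programmatically. The project, indicators, and assumptions below are entirely hypothetical, used only to show the shape of the structure.

```python
# A minimal logical-framework sketch as nested data (hypothetical project).
logframe = {
    "goal": "Reduce under-five malaria mortality in the target LGAs",
    "outcomes": [{
        "statement": "Increased bed-net use among households",
        "indicators": ["% of households using treated nets"],
        "verification": ["household survey"],
        "assumptions": ["nets remain available in local markets"],
    }],
    "outputs": [{
        "statement": "Nets distributed to 10,000 households",
        "indicators": ["number of nets distributed"],
        "verification": ["distribution records"],
        "assumptions": ["supply chain remains functional"],
    }],
    "inputs": ["funding", "nets", "distribution staff"],
}

# Each outcome and output pairs its indicators with verification and assumptions,
# mirroring the columns of a logframe matrix.
for level in ("outcomes", "outputs"):
    for row in logframe[level]:
        print(level, "->", row["statement"], "|", ", ".join(row["indicators"]))
```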
Data quality assessment (DQA) is a way of checking data collected in a project or organization against dimensions such as integrity, soundness, accuracy, reliability, precision, completeness, timeliness, and confidentiality. An important step in a DQA is deciding and agreeing on the key project or organizational performance indicators that will be examined against these dimensions of quality. A DQA comprises two main components:
- Assessment of the data management and reporting system, and
- Verification of reported data in the field or at the point of activity implementation.
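The field-verification component often reduces to comparing reported counts against recounts from source documents. A minimal sketch, with made-up site names and figures, computing a simple verification factor (recounted over reported):

```python
def verification_factor(reported: int, recounted: int) -> float:
    # Ratio of field-verified counts to reported counts:
    # close to 1.0 suggests accurate reporting, below 1.0 over-reporting,
    # above 1.0 under-reporting.
    return recounted / reported

# Hypothetical sites: (value reported upward, value recounted at source)
site_reports = {
    "Site A": (250, 240),
    "Site B": (180, 198),
    "Site C": (300, 300),
}

for site, (rep, rec) in site_reports.items():
    print(f"{site}: verification factor = {verification_factor(rep, rec):.2f}")
```

In practice a DQA would also score the data management and reporting system itself; this sketch covers only the verification arithmetic.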
At CFIC we have implemented several DQAs through the Nigeria Monitoring and Evaluation Management Services (NMEMS), The Mitchell Group, Inc.'s outfit in Nigeria. NMEMS interfaces between USAID/Nigeria and its implementing partners to ensure data quality in the reporting of key performance indicators.
The District Health Information System (DHIS) is a system for ensuring the quality of reported health and related performance indicators. The goal is to develop and implement a unifying health information system that is efficient from the grassroots units in local government areas (LGAs) through the state level to the national level, in the case of a three-tier system of governance. CFIC staff have been trained on the USAID reporting software and have participated in training health personnel at various levels of governance.
CFIC has seasoned consultants in project and organizational management. We provide technical expertise on management processes, drawing on contemporary theories such as systems, chaos, and contingency theory to meet the specific needs of our clients. Key aspects of our management approach are vision refocusing, risk diagnostics, problem diagnosis and solutions, and the alignment of tasks and resources for maximum effectiveness.
EVALUATION
- Baseline Evaluation
- Mid-Term Evaluation
- End-of-Project Evaluation
- Process Evaluation
- Impact Evaluation
- Cost-Benefit Analysis
- Cost-Effectiveness Analysis
Baseline evaluation is conducted to establish benchmarks or starting points, based on project parameters, for measuring performance. Baselines are usually established using quantitative methods such as surveys and experimental designs, and qualitative methods including focus group discussions, key informant interviews, and observation techniques.
Mid-term evaluation is conducted to examine performance at the mid-point of project implementation, with a view to providing feedback on implementation strategies for better performance. A mid-term evaluation ideally employs the same or similar performance indicators as the baseline, to ensure that any observable change can be attributed to program implementation.
End-of-project evaluation is usually conducted at the close-out of a project to examine whether objectives have been achieved as planned, including key achievements, challenges, and lessons learnt. The focus is on comparing end-of-project indicators with baseline or mid-term indicators to ascertain changes and performance due to exposure to project strategies. The same or similar parameters and indicators as those used at baseline or mid-term are employed at end-of-project to control, as much as possible, for design effects. In situations where no baseline was established at project take-off, ex-post facto designs using retrospective questions are employed at end-of-project.
Process evaluation is concerned with the different stages of the project implementation cycle, with special focus on the performance of inputs and outputs. It triangulates routine data with data from commissioned or other sources, both qualitative and quantitative. Process evaluation provides insight into how efficiently inputs achieve the stated outputs of a project or organization.
Impact evaluation is conducted to ascertain lasting changes that may have occurred in a sub-group or community as a result of project implementation. It focuses more on impact indicators than on project outcome indicators. The purpose is to evaluate the successes and failures of a project, its sustainability, and its strategies, and to assess the changes or transformation in the target population. It employs quasi-experimental designs, including case-control, ex-post control, or ex-post designs, as may be expedient. Impact evaluation is usually recommended after some time has elapsed since the project was implemented. CFIC consultants have implemented several impact evaluations triangulating quantitative and qualitative methods.
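One common estimator in quasi-experimental impact work is the difference-in-differences: the change in an indicator for the exposed group minus the change for a comparison group over the same period. The sketch below uses a hypothetical indicator and made-up percentages, not results from any CFIC evaluation.

```python
def diff_in_diff(treat_before: float, treat_after: float,
                 control_before: float, control_after: float) -> float:
    # Difference-in-differences estimate of project impact:
    # the treatment group's change minus the comparison group's change,
    # which nets out trends affecting both groups alike.
    return (treat_after - treat_before) - (control_after - control_before)

# Hypothetical indicator: % of households using treated bed nets,
# measured at baseline and end-of-project in both groups.
impact = diff_in_diff(treat_before=40.0, treat_after=65.0,
                      control_before=42.0, control_after=50.0)
print(f"Estimated impact: {impact:.1f} percentage points")  # 17.0
```

The estimator's validity rests on the usual parallel-trends assumption, i.e. that without the project both groups would have changed similarly.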
Cost-benefit analysis examines the cost of implementing a project in monetary terms and the benefits (or consequences), also in monetary terms. It examines the difference between valuated costs and benefits to determine whether the benefits outweigh the cost of the project. Cost-benefit ratios, i.e. the ratio of benefits to costs, are measurements for ranking performance on the project. The analysis relies on survey data and routine monitoring data on project indicators in a study population. Key CFIC experts are knowledgeable about cost-benefit and cost-effectiveness analysis, their distinctions, and their data requirements.
Cost-effectiveness analysis is employed to compare projects with similar outcomes within and across sub-groups. Data analysis relies on specific sources, including routine surveillance data on the incidence and prevalence of an event, and household survey data for baseline and follow-on years. Calculations normally include cost-effectiveness ratios and quality-adjusted life years (QALYs) for key project interventions. Cost-effectiveness ratios can be calculated in terms of the cost of a proposed intervention strategy vis-à-vis other approaches used in the past.
Cost-Benefit and Cost-Effectiveness Indicators in Health Projects

Measures                   Cost   Benefits   Indicators
Cost-benefit               C1     B1         B1 - C1
Cost-benefit ratio         C1     B1         B1 / C1
Cost-effectiveness         C1     B1*        B1* - C1
Cost-effectiveness ratio   C1     B1*        B1* / C1
QALY                       -      L2         L2(P1Q1), adjusting for disease-related morbidity and mortality

Notes: B1* is measured in "natural units", i.e. life-years or other measures of benefit; P1Q1 is a probability function indicating quality of life; L2 is person-years.