February 10, 2021
Monitoring and Evaluation: Differences and Synergies between the Two
Monitoring and Evaluation (M&E) has become a phrase we say in one breath, and the two terms are frequently used interchangeably. In this blog, I explain the differences and synergies between monitoring and evaluation.
What is Monitoring and Evaluation?
- Monitoring is focused on tracking evidence of movement towards the achievement of specific, predetermined targets.
- Evaluation takes a broader view of an intervention, considering not only progress toward stated goals but the logic of the initiative, as well as its consequences.
- Both are needed to minimize the risk of failure and maximize success, but a monitoring system is an absolute necessity for any programme, however small.
Differences between Monitoring and Evaluation
| Monitoring | Evaluation |
| --- | --- |
| Routine and ongoing – starts and stops with the programme. All programmes need some monitoring, so it is mandatory. | Periodic and time-bound. Usually done in three stages: prospective, formative, and summative. Evaluation can be optional. |
| Answers the question "Are things being done right (as planned)?" Translates objectives and processes into SMART indicators and sets targets for them. | Answers broader questions from the perspective of better planning and targeting, programme design, improving implementation, and demonstrating success. |
| Helps you identify failures before they happen and enforce accountability for implementation. | Helps you design for impact, manage the programme for benefits/outcomes beyond contractually obligated outputs, and demonstrate success. |
| Ideally 'internal', because it works best when brutally honest and trusted by all not to "harm" the programme objectives or its stakeholders unfairly. However, some parts can be external to build accountability towards the donor and the community. | Internal, external, or participatory, depending on the purpose and use of evaluation findings. |
| Key skills required are an excellent understanding of the programme and local contexts, plus good project management. Best done by the programme team, with design, analysis, and interpretation support from an evaluation expert (internal or external). | Mixed-methods research skills, both qualitative and quantitative, are needed. Evaluations require objectivity and skills that often call for arm's-length distance from programming. Best led by dedicated evaluators (internal or external). |
| Technology plays a vital role in developing "job-aid and monitoring" systems, so that monitoring is not a separate task but happens as part of implementation. Simple dashboards are easy to build and use, but more advanced analyses need skills usually not available internally. | Technology helps in structured data collection (e.g., computer-assisted personal interview [CAPI] surveys), but most tasks, such as design, analysis, and interpretation, cannot be automated. |
| The cost can be minimal if monitoring is integrated with programme implementation, technology is used 'smartly' (simple yet effective), the work is done internally, and expectations are kept realistic. Usually about 2–5% of project funds are sufficient. | Cost can be high depending on the questions asked, the rigour of the method, and whether internal or external resources are used. |
In our experience, getting started with a monitoring system or evaluations can feel challenging to many, but only because of the fear of the unknown. Let me assure you that most organisations can establish a useful and effective monitoring system on their own with just a little hand-holding support. NEERMAN has several free resources (click here to register and access the resources) and blogs available to get you started, or you can set up a free micro-consulting session with us!