Evaluation: Top 3 Common Myths about Evaluation Busted!
March 24, 2021
You would not believe how many times we come across myths that stop organizations from maximizing their impact in an evidence-based manner. Let me bust the three most common ones!
Myth #1: Evaluations are done after the programme is over

If you haven’t done evaluations before starting the programme (called prospective evaluations), you risk playing catch-up with local contexts and needs that may differ from your last programme. Similarly, you need formative evaluation to understand whether your processes are indeed effective and efficient, and whether you will be able to deliver the outcomes as per the ‘expected (assumed?)’ theory of change. Summative evaluations are done after a programme is complete, to demonstrate success (if any!) and to generate lessons for the next programming cycle. Watch our video on Type of Evaluation to learn more.
Myth #2: Evaluation, impact assessment, and impact evaluations are all the same
Yes and no. The evaluation community is perhaps guilty of using these terms interchangeably. At NEERMAN, we distinguish between the three based on the most recent and most widely agreed understanding across different sectors and schools of thought. Evaluation is a broad, general term that can encompass all the types and stages of evaluation discussed above. An impact evaluation is a summative evaluation that answers a causal question: what change is attributable to the programme? An impact assessment can be done before (prospective) or after (summative) a programme, to assess what higher-order impacts are possible or were delivered by the outcomes achieved under the programme. These impacts can be intended or unintended, positive or negative.
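To make “attributable change” concrete, here is a minimal sketch of the arithmetic behind a randomized impact evaluation. All numbers, group sizes, and variable names are hypothetical, invented purely for illustration; a real evaluation involves far more careful design and analysis.

```python
# A minimal, illustrative sketch of "attributable change" in an
# impact evaluation, assuming a simple randomized design.
# All data below are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical outcome scores (e.g. a household welfare index) for
# randomly assigned treatment and comparison groups.
treatment = rng.normal(loc=55.0, scale=10.0, size=500)   # received the programme
comparison = rng.normal(loc=50.0, scale=10.0, size=500)  # did not

# With random assignment, the difference in mean outcomes estimates
# the change attributable to the programme (the "impact").
impact_estimate = treatment.mean() - comparison.mean()

# A simple standard error for the difference in means, to show that
# the estimate always comes with uncertainty attached.
se = np.sqrt(treatment.var(ddof=1) / treatment.size
             + comparison.var(ddof=1) / comparison.size)

print(f"Estimated impact: {impact_estimate:.2f} (SE {se:.2f})")
```

The key point is that the comparison group, not the programme’s own baseline, supplies the counterfactual against which “attributable change” is measured.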
But hey! As long as you don’t get caught up in terminology and focus on “asking the right questions”, you will do an evaluation that is helpful. Read more on Komal’s blog.
Myth #3: Evaluation is costly

Oh yes! Like everything else in life, quality comes at a cost! But this cost pales in comparison to the value a ‘good’ evaluation can deliver:
(a) ensure the programme was designed for success; and
(b) demonstrate success when you actually deliver it.
The larger the success you (want to) claim, or the larger the ‘stakes’, the higher the demand for objectivity and rigor in the method. So we must balance the cost and value of the evaluation, considering:
(a) intended use of the findings – if you only need to convince an internal board or comply with a regulatory or funding organization’s requirement, then a less academically rigorous design may be acceptable;
(b) cost of getting a wrong answer – these costs are typically intangible, such as reputational risk, missing out on additional or larger funding, and in some cases causing more harm than good; and
(c) cost of getting the right answer – these costs are mainly driven by the design of the evaluation, who conducts it (internal versus external, and the caliber of the evaluator), the type of measurements (subjective versus objective), and the accuracy needed, which drives the sample size (see the sketch after this list).
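As a rough illustration of point (c), a standard back-of-the-envelope power calculation shows how the accuracy you demand drives the required sample size, and hence the cost. This is a generic textbook formula with illustrative parameters, not a specific costing method used by NEERMAN.

```python
# A hedged sketch of why "accuracy needed" drives evaluation cost:
# the smaller the effect you must detect, the larger the sample.
# Standard two-arm sample-size formula; all parameters are illustrative.
import math
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm to detect a standardized effect
    (difference in means divided by SD) in a two-arm comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # e.g. 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Halving the detectable effect roughly quadruples the sample (and cost).
for d in (0.5, 0.25, 0.1):
    print(f"effect size {d}: ~{n_per_arm(d)} per arm")
```

Running this shows why high-stakes claims about small effects demand much larger, and therefore costlier, studies than modest internal learning exercises.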
Many evaluations can be done internally by implementing organizations, with handholding support from experts. You can schedule a FREE micro-consulting session with us to get clarity on evaluation methods!