Workshop: An Introduction to Quantitative Impact Evaluation Tools

On May 8th, the Master Program in Planning and Development Policy held a workshop, An Introduction to Quantitative Impact Evaluation Tools. The lecturer for the session was Rus'an Nasrudin.

Things to know about causal inference:
- A causal claim is a statement about what didn't happen.
- There is a fundamental problem of causal inference: the counterfactual is missing.
- You can estimate average causal effects even if you cannot observe any individual causal effects (see the sketch after this list).
- If you know that, on average, A causes B and that B causes C, this does not mean that you know that A causes C.
- The counterfactual model is all about contribution, not attribution.
- X can cause Y even if there is no "causal path" connecting X and Y.
- Correlation is not causation.
- X can cause Y even if X is neither a necessary nor a sufficient condition for Y.
- Estimating average causal effects does not require that treatment and control groups are identical.
- There is no causation without manipulation.
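To make the first few points concrete, here is a minimal simulation sketch (not from the workshop materials; the data-generating process, variable names, and numbers are illustrative assumptions). Only one potential outcome per person is ever observed, yet under random assignment the simple difference in group means recovers the average causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Potential outcomes for each individual: y0 (without the program) and
# y1 (with the program). In real data only one of the two is ever observed,
# which is the "missing counterfactual" problem.
y0 = rng.normal(loc=10.0, scale=2.0, size=n)
y1 = y0 + rng.normal(loc=1.5, scale=1.0, size=n)   # heterogeneous individual effects

# Random assignment to treatment, as in an RCT.
t = rng.integers(0, 2, size=n)

# We only observe the realised outcome, never both potential outcomes.
y_obs = np.where(t == 1, y1, y0)

true_ate = np.mean(y1 - y0)                       # knowable only inside a simulation
estimated_ate = y_obs[t == 1].mean() - y_obs[t == 0].mean()

print(f"true ATE      : {true_ate:.3f}")
print(f"estimated ATE : {estimated_ate:.3f}")     # close to the true ATE
```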

Impact evaluations are a particular type of evaluation that seeks to answer cause-and-effect questions. Unlike general evaluations, which can answer many types of questions, impact evaluations are structured around one particular type of question: What is the impact (or causal effect) of a program on an outcome of interest? This basic question incorporates an important causal dimension: we are interested only in the impact of the program, that is, the effect on outcomes that the program directly causes. An impact evaluation looks for the changes in outcome that are directly attributable to the program (Gertler et al. 2011).

Sources of bias in non-experimental settings (we call these sources of bias "endogeneity problems"):
- Reverse causality
- Omitted variable bias or selection problems
- Systematic measurement error

Estimation techniques:
- Randomised Controlled Trials (RCTs)
- Difference-in-differences (DiD) (see the sketch after this list)
- Matching
- Instrumental variables (IV)
- Regression discontinuity design (RDD)
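As a concrete illustration of the DiD idea, here is a minimal sketch (not taken from the workshop; the simulated data, the true effect of 2.0, and the variable names treated/post are illustrative assumptions): the estimator compares the before/after change for the treated group with the before/after change for a comparison group.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2_000

# Simulated repeated cross-section over two periods: 'treated' marks the
# group exposed to the programme; 'post' marks the second period.
treated = rng.integers(0, 2, size=n)
post = rng.integers(0, 2, size=n)

# Outcome with a group effect, a common time trend, and a true programme
# effect of 2.0 that only appears for treated units after the programme starts.
y = (
    5.0
    + 1.0 * treated          # pre-existing level difference between groups
    + 0.5 * post             # common shock affecting everyone over time
    + 2.0 * treated * post   # the causal effect DiD tries to recover
    + rng.normal(0, 1, n)
)

df = pd.DataFrame({"y": y, "treated": treated, "post": post})

# Difference-in-differences: (change over time for the treated group)
# minus (change over time for the comparison group).
means = df.groupby(["treated", "post"])["y"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"DiD estimate of the programme effect: {did:.3f}")   # close to 2.0
```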

Choice of instrument: finding an instrument is difficult. Where to find an instrument (a sketch of the IV logic follows this list):
- Policy reforms
- Political variables
- Partially randomised or random encouragement designs
- Geographic characteristics
- Irregularities in programme design (random shocks to selection)
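To illustrate why an instrument helps, here is a minimal sketch of the IV (Wald) estimator (again not from the workshop materials; the confounder u, the instrument z, and the true effect of 2.0 are illustrative assumptions): a naive comparison is biased by an unobserved confounder, while scaling the instrument's effect on the outcome by its effect on take-up recovers the programme effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Unobserved confounder (e.g. ability) that drives both programme
# take-up and the outcome, creating an endogeneity problem.
u = rng.normal(size=n)

# Instrument: affects take-up but (by assumption) has no direct effect
# on the outcome and is unrelated to the confounder.
z = rng.integers(0, 2, size=n)

# Endogenous programme take-up and outcome; the true programme effect is 2.0.
d = (0.5 * z + 0.8 * u + rng.normal(size=n) > 0.5).astype(float)
y = 1.0 + 2.0 * d + 1.5 * u + rng.normal(size=n)

# Naive OLS slope of y on d is biased because of the confounder.
naive = np.cov(y, d)[0, 1] / np.var(d, ddof=1)

# Wald / IV estimator: cov(y, z) / cov(d, z), equivalent to 2SLS with a
# single binary instrument and no other covariates.
iv = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]

print(f"naive OLS slope : {naive:.3f}")   # biased upward by the confounder
print(f"IV estimate     : {iv:.3f}")      # close to the true effect of 2.0
```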