The effects of poverty, inequality and youth unemployment continue to pose challenges as we advance in our democracy. These socioeconomic problems have to be addressed and policy solutions found, either to eliminate them or reduce their severity.
In this regard, policymakers plan policy interventions that seek to address these problems. These interventions are informed by conceptual thinking that says: “If we provide a programme in this specific way, targeting this type of beneficiary, it will bring about this outcome.” Public programmes are then devised and implemented accordingly.
These policy interventions come to life through the implementation of public programmes within the machinery of government. We then see public programmes such as the child support grant, part of the social protection system; the proposed youth wage subsidy, which seeks to address the endemic youth unemployment challenge; and the upgrading of informal settlements programme, which seeks to meet social housing needs.
How can we know that government programmes actually work? Evaluating their impact is essential.
The “impact agenda” calls for impact evaluations that are relevant to policy-making, that communicate evidence and that provide clear policy direction. It encourages progress from the monitoring of programme outputs towards the evaluation of outcomes and impact. This has resulted in a focus on programme impact evaluation, driven by calls for evidence of what works for whom, why, how, when and under what circumstances.
Why is it important to demonstrate impact? There is emerging new thinking regarding the effectiveness of public policies and programmes, in terms of their relevance, their performance, “value for taxpayer money” and sustainable impact.
In this regard we need to go beyond measuring programme outputs and instead detail the outcomes and impact of interventions on a long-term, sustainable basis.

Government programmes are invariably “big ticket” public programmes demanding the commitment of large budgetary resources. In the past, perfunctory periodic reporting on output spending was the norm. Increasingly, the public is demanding value for money and asking “so what?”. Accountability is demanded for the vast resources spent, and for demonstration of the ultimate outcomes and impacts in the lives of the intended beneficiaries, some of whom are the poorest of the poor and the most marginalised.

Are we well served by one-size-fits-all methods of assessing policy interventions? While the statistical counterfactual has until recently remained the default evaluation approach in many parts of the world, including South Africa, it is not always feasible or enlightening in explaining how social programmes work. Simply knowing that a programme “works” is not good enough. Alternative methods of verifying the impact of government programmes should be considered.

Useful impact evaluation should specify whether a programme works, how it works and under what conditions it works. Policy decision-makers can then use that evidence base to inform the review of current policies or the formulation of future ones. This will facilitate programme enhancement as well as implementation of the programme in new contexts.

Is it possible to transcend the blind and unquestioning adoption of the prevailing methods? Both the World Bank and the International Initiative for Impact Evaluation, some of whose founders were affiliated with the World Bank, are powerful global organisations that have influenced the synthesising and dissemination of impact evaluation evidence in the developing world from a particular and precise methodological stance.
The application of experimental design methodologies in the evaluation of social programmes, such as those applied in clinical trials, has been the hallmark of most World Bank evaluations. Similarly, the International Initiative for Impact Evaluation’s evidence base of what works in social development interventions globally largely emulates the methods applied in biomedical fields. These powerful organisations have therefore influenced the trajectory of impact evaluation designs, particularly in the developing world.

Against this backdrop, in the last few years there has been an increasing call for “made in Africa” and “Africa-rooted” evaluation. There is an awareness that evaluation in Africa is still emergent, and that its methods and practices are largely drawn from countries in the Global North. While we should adapt, refine and customise some of the predominant evaluation methods to suit the African context, African theorists and methodologists, acting in their own context, should deliberately and purposefully define entirely new paradigms that are indigenously African and decolonised.

• Nombeko Mbava is a PhD graduand at Stellenbosch University School of Public Leadership.