Considerable thought has gone into making monitoring practices generate outcome-level results, and M&E practitioners have largely begun adopting contemporary approaches: quick, efficient and well-thought-out methods of gathering evidence not just to verify or assess the progress made, but to build evidence for outcomes, the change that has emerged from the work done. A few years back this would have sounded insane to practitioners and researchers: how can training data help gauge progress on outcomes? Frankly, very few of those who believed in it back then took the risk of practising it; it sounded like a taxi driver agreeing to a ride with the wrong fuel in the tank. But things are changing, and changing rapidly. Technology is now becoming a deciding factor, and a demanding one. It is like asking a research organisation with 100,000 interviews to conduct in two weeks to replace paper with tablets. If one asks whether it is possible, I would say yes. How challenging? Easy, if end users get tools that are simple yet equipped enough to capture, track and monitor progress on outcome indicators, which otherwise tends to be delayed.

We practitioners tend to collect too much monitoring data, activity by activity. One solution for integrating outcome evaluation with monitoring is to pick the right dimensions or variables: the ones that actually feed into outcome progress, rather than merely reporting numbers at the output level. Take the case of a training course (a series of trainings planned for participants, ToTs and master trainers). In such events we collect pre- and post-training evaluation data and come out with individual scores for all participants and one cumulative score for each training. With a small change in approach, we could also capture progress on outcomes, provided the data for the dimensions of that particular outcome is collected in sync with the monitoring data collection event.
There may be cases where not all dimensions of an outcome fit neatly into the monitoring data collection process. In that case, adding a section to the pre- or post-evaluation would help us gauge progress well enough to understand trends and patterns, and to take the pulse of the key dimensions of the outcome indicators, if not all of them.
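A minimal sketch of the idea above: a single pre/post training event yields individual gains and one cumulative score per training, and the same collection moment can carry outcome-dimension data. All names, scales and figures here are hypothetical illustrations, not a prescribed instrument.

```python
# Sketch: combine pre/post training evaluation with outcome-dimension tracking.
# Dimension names, score scales and records are illustrative assumptions.

def participant_gain(pre, post):
    """Score gain for one participant (post-test minus pre-test)."""
    return post - pre

def training_summary(records):
    """records: list of dicts with 'pre', 'post' and optional
    'outcome_dims' scores collected during the same event."""
    gains = [participant_gain(r["pre"], r["post"]) for r in records]
    cumulative = sum(gains) / len(gains)  # one cumulative score per training
    # Average each outcome dimension collected alongside the test.
    dims = {}
    for r in records:
        for dim, value in r.get("outcome_dims", {}).items():
            dims.setdefault(dim, []).append(value)
    dim_progress = {d: sum(v) / len(v) for d, v in dims.items()}
    return {"individual_gains": gains,
            "cumulative_gain": cumulative,
            "outcome_progress": dim_progress}

records = [
    {"pre": 40, "post": 70, "outcome_dims": {"knowledge_applied": 3}},
    {"pre": 55, "post": 75, "outcome_dims": {"knowledge_applied": 4}},
]
print(training_summary(records))
```

The point of the sketch is only that one collection incident can feed both levels: the gains serve the training's output reporting, while the averaged dimension scores accumulate into a trend line for the outcome indicator.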
The methodology is simple: define the output and outcome dimensions clearly, then identify which of them best fit a pre- and post-evaluation test and which suit joint data collection. Not only are efforts reduced, saving time and other resources; this small change in thinking improves data collection for output and outcome indicators simultaneously, feeding concretely into management decision-making and enabling quick adjustments. Doing it separately for outputs and outcome indicators would delay results, and thereby delay decisions on adjustments, resource demands, and the timeliness and acceptance of management decisions. For example, a fact sheet, handout or white paper based on routine monitoring data is likely to gain more acceptance among advocates of policy change, for the simple reason that at each meeting they see new sets and patterns of data, creating a push factor to think about policies and regulatory frameworks, and about whether this is actually the first step towards a change or amendment in such policies.
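The sorting step named above, deciding which dimensions fit a pre/post test and which should be joined to routine monitoring visits, could be sketched like this. The dimension names and the single "measurable in session" criterion are hypothetical; a real plan would apply whatever fit criteria the team defines.

```python
# Sketch: classify indicator dimensions by best-fit collection method.
# Names and the 'measurable_in_session' flag are illustrative assumptions.

dimensions = [
    {"name": "trainer_knowledge", "level": "output", "measurable_in_session": True},
    {"name": "classroom_practice", "level": "outcome", "measurable_in_session": False},
    {"name": "confidence_to_train", "level": "outcome", "measurable_in_session": True},
]

def collection_plan(dims):
    """Split dimensions: those measurable during the event go into the
    pre/post test; the rest are collected jointly with monitoring visits."""
    plan = {"pre_post_test": [], "joint_monitoring": []}
    for d in dims:
        key = "pre_post_test" if d["measurable_in_session"] else "joint_monitoring"
        plan[key].append(d["name"])
    return plan

print(collection_plan(dimensions))
```

Once such a plan exists, each data collection event knows exactly which outcome dimensions it owes, which is what keeps output and outcome data arriving together rather than on separate schedules.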
Monitoring data thus has the power to lay a founding step by persuading policy- and decision-makers, who get access to fresh data sets more often, thereby creating a push factor. Together with periodic impact data (based on the performance of key outcome indicators), this creates a reinforcing effect, not just on performance measurement but also on evidence building for outcomes and impact.