In the previous blog post, the role of changing M&E practices vis-à-vis innovations was highlighted to draw practitioners' attention to simple yet highly effective, tested practices. Over time, agencies and donors have placed greater emphasis on building evidence for programmes of every type – emergency response, economic development, civil society engagement, youth empowerment, climate change, cash for work, short-term, long-term, and so on. Organizations often allocate resources towards building effective monitoring mechanisms, the thinking being that this helps gauge progress on time – and in some cases in real time – to feed decision making for project implementation. While monitoring plays a significant role in making an intervention responsive enough for all stakeholders, especially implementing organizations, to take decisions and make appropriate changes in case of lags, an effective monitoring process alone won't be helpful unless its data feeds into project outcome indicators. There are two reasons for this: A) data for outcome-level indicators is usually collected periodically, so it becomes highly dependent on those time periods, and in between organizations have nothing in hand or mind other than output numbers; B) for policy change, evidence building should be a continuous, rapid process rather than a periodic one (once in three months, twice a year, etc.) – decision making for policy change needs substantiated evidence, and a few isolated pieces instead of a continuous flow of evidence may not always be effective. It is therefore necessary to think carefully when developing indicators, and to take output indicators a step further than merely reflecting progress.
Throughout the project timeline, I have found organizations falling prey to circumstances (something that could have been well addressed at the planning stage gets delayed at that phase) and hence amending their monitoring mechanisms to bounce back with whatever progress has been made – in some cases conducting a rapid assessment. While this measure may prove a temporary solution to a lag (a lack of evidence for a change in the project plan), it cannot be considered an effective way out. Hence the need emerges for a highly thoughtful, integrated M&E system. An M&E system with innovative data collection and dissemination practices in place makes the whole process quick and real-time. For example, data collected using a mobile application integrated with data dashboards serves many purposes – collection, collation, analysis and sharing become quick and widely accessible through the dashboards. This reduces the time lags that otherwise add delays, directly impacting decision making vis-à-vis the relevance of the data. For instance, if librarians need to know daily how public access computers in a public library are used – the content surfed, time spent online, type of user, health of a computer – in order to assess the usefulness of those computers, the role of innovative software becomes critical. This quickness in information flow helps librarians take decisions frequently: keeping new resources (online and offline) available for users, procuring the books, magazines and periodicals most accessed online to widen access, repairing computers on time so they stay functional, and increasing the number of public access computers. Without evidence, doing all this becomes difficult and may lead to ineffective decision making. Remember, all of this can be available independently of outcome-level data (usually collected periodically).
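To make the library example concrete, here is a minimal sketch, in Python, of how daily session logs from public access computers might be collated into the kind of summary a dashboard could display each morning. The field names (`date`, `minutes_online`, `content_category`, `user_type`) and the sample records are entirely hypothetical, invented for illustration.

```python
from collections import Counter
from statistics import mean

# Hypothetical session logs, as a mobile app or logging agent might record them.
# All field names and values below are illustrative, not from any real system.
sessions = [
    {"date": "2024-05-01", "minutes_online": 35, "content_category": "job search", "user_type": "adult"},
    {"date": "2024-05-01", "minutes_online": 20, "content_category": "e-magazines", "user_type": "student"},
    {"date": "2024-05-01", "minutes_online": 50, "content_category": "job search", "user_type": "adult"},
    {"date": "2024-05-02", "minutes_online": 15, "content_category": "news", "user_type": "senior"},
]

def daily_summary(logs, date):
    """Collate one day's sessions into the figures a dashboard could show."""
    day = [s for s in logs if s["date"] == date]
    if not day:
        return None  # no sessions recorded for that date
    categories = Counter(s["content_category"] for s in day)
    return {
        "date": date,
        "sessions": len(day),
        "avg_minutes": round(mean(s["minutes_online"] for s in day), 1),
        "top_category": categories.most_common(1)[0][0],
    }

print(daily_summary(sessions, "2024-05-01"))
# → {'date': '2024-05-01', 'sessions': 3, 'avg_minutes': 35.0, 'top_category': 'job search'}
```

A librarian glancing at such a summary daily could, for instance, notice that "job search" dominates usage and stock related resources – exactly the kind of frequent, small decision the paragraph above describes, available without waiting for periodic outcome data.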
While outcome-level data helps measure how an intervention has been able to change people's lives, output-level data can well inform what progress has been made towards a desired change and what challenges were involved.
P.S. While travelling for a random field visit, someone shared that feedback forms collected daily are shared monthly for the convenience of sharing. Is this actually feedback that WORKS?