Monitoring and evaluation: the overlooked linchpin

Monitoring and evaluation is often overlooked in the project cycle. This article is a preliminary discussion of the subject.

Across the Caribbean, and as highlighted in previous articles, several countries in the region either have rolled out, or are in the process of rolling out, ‘tablet computers in schools’ or ‘one laptop per student’ programmes, which aim to provide each student with individual access to a computing device to improve their learning experience. Due to the nature of the tools involved – tablet computers and laptops – their procurement can be quite costly to a country. However, governments often contend that the spend is justifiable if student pass rates and competence improve considerably, and if students become better prepared for the ever more tech-driven society that is evolving.

However, in a report tabled in the Lower House of the Jamaica Parliament last week, the country’s Auditor General expressed concern about the lack of monitoring of the recently established tablets in schools project:

In the absence of the requisite reports and proper monitoring of the programme, management may find it difficult to take timely actions on any issues that may arise…

(Source:  The Gleaner)

Launched in August 2014 as a pilot project in which 25,000 tablet computers were assigned to students and teachers in 38 pre-primary, primary and secondary schools, as well as teacher training institutions across the island, Jamaica’s tablets in schools programme has been garnering considerable attention. The pilot ran for a year, at a cost of around USD 13 million, which was financed to a considerable extent by the Universal Service Fund (USF) for telecoms, which collects a levy from local telecoms carriers.

As the pilot period drew to a close, various government officials declared it had been a resounding success, thus supporting its continuation and wider roll-out. However, what is now coming to light from the Auditor General’s Office suggests that there might not have been any formal or cogent system to monitor and evaluate the programme, especially by the public authority that was instrumental in financing it.

The predicament in which the USF in Jamaica may have found itself is by no means unique, in the Caribbean or elsewhere. However, it does emphasise the importance of monitoring and evaluation (M&E) – not just across public sector projects and activities, but also across the private sector: from micro-businesses through to large corporates.

Not an afterthought

All too often, M&E is not seen as an integral component of a project or programme. It thus tends to be a “quick and dirty” assessment undertaken as an initiative draws to a close. However, when this approach is employed, it may no longer be possible to track or assess certain critical parameters, or to generate adequate data to inform important decisions. Hence, although there might still be parameters that can be measured, they might not be the optimal or most compelling ones, and the resulting exercise essentially becomes just a means to an end.

To better address this matter, and make M&E more integral to the process, it is imperative that it be considered during the project/programme conceptualisation and development stages. Further, it must be adequately resourced and implemented as the initiative unfolds, in order to produce meaningful results.

Asking the right questions

It is also important to recognise that an M&E exercise is only as good as the questions being asked. It can be a rigorous tool if compelling questions must be answered. On the flip side, if the exercise has superficial questions at its core, its outcome might not be especially helpful.

Invariably, the two key questions that M&E exercises seek to answer concern the effectiveness of a particular project/programme and, correspondingly, its impact, both of which speak to outcomes and the overall usefulness of the project. With regard to implementation, matters of interest tend to include the use of resources, the timeliness of deliverables, and the effectiveness of the methodology and work plan followed.

In summary, M&E ought to be an integral part of any project or activity that is being undertaken. While some of a project’s successes or shortcomings might be readily evident, they may not be enough, or properly understood, to make important decisions. In a business setting, and especially where an iterative model has been adopted for product (and even business) development, the ability to continually track and assess where the product or business stands at any point in time is even more critical. However, the tracking scheme must be considered from the outset and integrated into the business, in order to produce the best results.


Image credit:  jasleen_kaur (flickr)



  • Interesting article @Michele, here in T&T the success and continued viability of the laptop giveaway programme has been in question. The problem is, as you stated, monitoring – not just the use of the laptops, but how it affects and improves learning. Technology for technology’s sake has its benefits, but without monitoring and evaluation, these programmes will always be questionable. Trinidad and now Jamaica are the pioneers in the Caribbean; the other islands are looking on. The real question is: are we setting a good example?

  • This is certainly common in the Caribbean and elsewhere. A larger issue for me is that the “tablet intervention” was not predicated on an identification of root causes of the performance discrepancies.
    It is a clear case of selecting the “transportation” before the “destination” is clearly defined. So it is entirely possible to evaluate what was done and find that it was actually done; but the performance issue not only remains, it becomes more firmly entrenched.
