Most of us working in prevention are challenged to think about the potential impact of our work. Preventing substance use and similar problem behaviors among youth is a serious and ethical responsibility. We need to be sure that we are delivering evidence-based interventions and policies to the right audience, in the right setting, and in the most effective way. While we recognize that the methodologies of monitoring and evaluation are important at every step in the implementation process, we see monitoring as crucial for giving us ‘real-time’ data that can inform how well we are doing as we progress toward our goals. It also tells us what might be needed to improve our reach and delivery in time to make changes to ensure better outcomes and sustainability.
In the graphic below, we see that Monitoring, or Process Evaluation, operates on the program inputs and outputs and tracks what is happening in the program while it is being delivered. This graphic also shows that monitoring is a major part of any evaluation, as it looks at the key ‘ingredients’ of the delivery of an evidence-based prevention intervention.
Monitoring links the elements of implementation, such as the audience, the implementer, and the delivery methods, to fidelity to the intervention’s original design. For evidence-based interventions and strategies to achieve the expected outcomes, it is important to maintain the content, structure, and delivery of the intervention as it was designed in the original research.
Monitoring is also just good program management. Monitoring keeps track of how much it costs to deliver the intervention in terms of staff time, materials, equipment, training, etc. It helps to demonstrate to funders or potential funders that the goals of your agency, coalition, or organization are being met. And if they aren’t being met, it points to what can be done to improve the situation.
In brief, monitoring answers questions such as these:
Is the prevention intervention/policy being implemented as it was intended? Are all of the classes being delivered?
Is the target population participating? Are your recruitment strategies working to achieve sufficient participation? Are you measuring how many classes/events individuals are attending?
What types of information or data are usually collected for monitoring the implementation of a prevention intervention? The most common is attendance data, which address questions such as:
Who participated in the intervention?
What were their characteristics?
How much of the intervention program did they receive?
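For readers who keep attendance records in a spreadsheet or script, the “how much did they receive” question is essentially a dosage calculation: sessions attended divided by sessions offered. Here is a minimal, purely illustrative sketch; the program length, participant names, and attendance records are all hypothetical.

```python
from collections import Counter

# Hypothetical attendance log for a 10-session prevention program.
# Each record is (participant, session number attended).
TOTAL_SESSIONS = 10

attendance = [
    ("Ana", 1), ("Ana", 2), ("Ana", 3), ("Ana", 5),
    ("Ben", 1), ("Ben", 2),
    ("Caro", 1), ("Caro", 2), ("Caro", 3), ("Caro", 4), ("Caro", 5),
]

# Count how many sessions each participant attended.
sessions_attended = Counter(name for name, _ in attendance)

# Dosage: the share of the full program each participant received.
dosage = {name: count / TOTAL_SESSIONS
          for name, count in sessions_attended.items()}

for name in sorted(dosage):
    print(f"{name}: {sessions_attended[name]} of {TOTAL_SESSIONS} "
          f"sessions ({dosage[name]:.0%})")
```

A summary like this makes it easy to spot low-dosage participants early, while there is still time to adjust recruitment or retention strategies.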
You may also want an assessment of the intervention from both the participants and the instructor:
Was the instructor credible?
Was the information clearly presented?
Will you be able to use the information in the future?
Were the facilities adequate for the intervention?
Were you able to cover all of the material?
Were the participants attentive?
Finally, after the program is delivered, monitoring is useful to determine whether the intervention achieved the intended short-term outcomes.
Did the participants learn the skills of the intervention?
Did the participants gain the knowledge needed to make changes?
Were the participants more informed regarding the normative nature of substance use?
Did the participants change their intentions regarding substance use?
Of course, the full range of evaluation involves the multiple outcomes we strive to reach with all our prevention work. We plan to explore the other initial, the “E” in M & E, in an upcoming Prevention Nugget. Also, APSI offers training in our professional training series, Foundations of Prevention Science and Practice: Course 8 – Building a Monitoring and Evaluation System for Evidence-Based Prevention Interventions and Policies.