I was recently asked: what is the difference between program evaluation and TCOM? My response was, first of all, that TCOM is clearly not in opposition to program evaluation; in fact, outcomes monitoring is an important part of program evaluation. However, TCOM takes a unique approach to evaluation problems, and it is also larger than program evaluation alone, and attending to all of these is totally the fun part!
Fundamentally, TCOM tools are meant to simplify communication across a system. They identify the core actions being decided upon in a multiple-program system and make it easier to act on them appropriately. Doing this already changes your system, so when you use a tool to evaluate a program in such an environment, you have to be careful.
The science of program evaluation is premised on producing objective measures of program operations, and as such it has historically set up processes to avoid "circularity." A typical program evaluation model administers an abstract tool at the beginning and end of treatment and then measures the change. The tool is meant not to influence treatment, and instead to measure the client at such a level of abstraction that the assessment itself should NOT impact the scoring. In this way, it aims to provide the most objective measure of a program's impact (i.e., "efficacy").
In contrast, TCOM tools are first and foremost clinical tools: they structure the assessment and referral process. The idea is that we regularly ensure everyone is using the tool reliably, and thus we have a reliable measure – but yes, we are measuring something new. As programs adopt TCOM, they change in ways that make classical evaluation models difficult to apply, because the program is not standing still. As a scientist, I agree this makes things difficult, but it is also what is, and has always been, true of community-based programs. Community-based care is messy and dynamic. Abstract measures of small parts of it are possible, and even valuable in many situations, but they are not the only measures of our work that can show whether we're doing a good job (i.e., being "effective"). This is where communimetric science steps in.
When the communication is reliable, we have a new way to measure what's going on. We can measure that people are communicating about new things, or in new ways – and these changes become the new source of our evaluation data. When I say outcomes have improved in a program measured with TCOM, I'm saying that, overall, the needs of the people in the program are decreasing. This is not an abstract designation. It is measurable not only in TCOM terms; it can even be translated into FTEs (less work to be done means less spent on the workers doing it). Yes, it is a measure that impacts what's in that "black box" of a program – but it's a solid measure that means something, so we should use it.
I’ll be writing more about the way TCOM interacts with the field of program evaluation, as well as quality monitoring and care management, in the year ahead.