Daniel Warner, Ph.D. Founder and President, Community Data Roundtable
To implement system change effectively, stakeholders must be able to see and measure the impact their initiatives are having. Community Data Roundtable works with communities to develop dashboards that show the progress a community is making on its initiatives. Below are two examples of dashboards we have developed in our efforts to improve referral to, and utilization of, an evidence-based treatment, Functional Family Therapy (FFT).
1) FFT Prescription/Approval Concurrence
“Concurrence Graphs” (below) measure two things. First, they show how often a service is being “prescribed” (that is, how often a referral is made to it) by the professionals who use the CANS-based DataPool to evaluate clients and prescribe services. The goal of this dashboard is an upward slope: the more often the FFT service is prescribed, the higher the line moves. Prescriptions are shown in the blue line, which rises as time moves on (note that in the system of care shown below, the algorithms went live in July, which corresponds with the increase in referrals to this service).
A Concurrence Graph, however, measures more than how often a service is being prescribed. It also measures how often the service is being approved by the payer (in this case, a behavioral health MCO). The red line shows how often the FFT service was approved in each month. This measure matters because, in today’s market-driven behavioral health system, it is the concurrence between providers and payers that actually produces system change; no single entity can effect change alone. It is common to hear providers complain that payers “always” deny their appropriate referrals to a program, and just as common to hear payers complain that providers “never” recommend clients to services outside their own offerings. A Concurrence Graph measures the role each party plays in referring to an identified service, and thus clarifies for system stakeholders the pressures involved in moving the system toward improved care.
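As a minimal sketch of how such a graph might be assembled, suppose we have a referral-level extract with one row per FFT decision. The file name and column names here (decision_date, prescribed_fft, approved_fft) are hypothetical placeholders, not CDR's actual schema:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extract: one row per FFT decision, recording the decision
# date, whether the evaluator prescribed (referred to) FFT, and whether
# the payer (the MCO) approved it.
referrals = pd.read_csv("fft_referrals.csv", parse_dates=["decision_date"])

# Roll the row-level decisions up into monthly prescription/approval counts.
monthly = (
    referrals
    .assign(month=referrals["decision_date"].dt.to_period("M"))
    .groupby("month")
    .agg(prescribed=("prescribed_fft", "sum"),  # referrals made each month
         approved=("approved_fft", "sum"))      # referrals approved each month
)

# Plot the two series; concurrence appears as the lines tracking together.
ax = monthly.plot(color=["blue", "red"], marker="o")
ax.set_xlabel("Month")
ax.set_ylabel("Number of FFT referrals")
ax.set_title("FFT Prescription/Approval Concurrence")
plt.tight_layout()
plt.show()
```

When the two lines track together, providers and the payer are concurring; when they diverge, one party is driving decisions the other does not support.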
In the graph below, it is interesting to note that before July 2014 there was more dissonance than concurrence between the payer and providers on FFT, visible in the lack of alignment between the blue and red lines. Once the CDR FFT algorithms went into effect, however, complete concurrence emerged between the payer and prescribers about referral appropriateness, while the number of referrals to the service rose at the same time. Regular review of this dashboard helps the community stay on track with its initiatives to improve the utilization of an evidence-based program in its area.
2) EBP-to-Match Ratio
Another important dashboard CDR makes for communities trying to increase the utilization of their evidence-based programs is an EBP-to-Match Ratio dashboard. While a Concurrence Graph measures how well providers and payers are collaborating to support the community’s identified EBPs, it doesn’t show how well providers are following the Evidence Based Service Match algorithms designed to assist them in matching children to appropriate care.
In an EBP-to-Match Ratio graph, each stacked column represents a month. The total height of the column is the total number of children who matched for the EBP during an initial evaluation (why we restrict to this subgroup is discussed below). Blue represents children who matched but were not referred to the service; red represents those who were referred. Viewed as a dashboard, the goal is for the columns to become redder over time, as clinicians increasingly adopt the logic of the algorithm into their clinical decision-making. As can be seen below, the columns are indeed getting redder over time, but the providers in this system of care still have a way to go in matching clients who receive an initial evaluation to the treatments designed for their needs (i.e., FFT).
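Here is a sketch of how those stacked columns might be computed, again assuming a hypothetical evaluation-level extract (one row per child who matched for FFT at an initial evaluation, with illustrative columns eval_date and referred_fft):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extract: one row per initial evaluation in which the child
# matched the FFT algorithm; "referred_fft" records whether the clinician
# actually made the referral.
matches = pd.read_csv("fft_matches.csv", parse_dates=["eval_date"])

# Per-month counts of matched children, split by referred vs. not referred.
counts = (
    matches
    .assign(month=matches["eval_date"].dt.to_period("M"))
    .groupby("month")["referred_fft"]
    .value_counts()
    .unstack(fill_value=0)
    .rename(columns={False: "Matched, not referred",
                     True: "Matched and referred"})
)

# Stacked columns: total height = all matched children that month; the
# goal is for the red (referred) share to grow over time.
ax = counts[["Matched, not referred", "Matched and referred"]].plot(
    kind="bar", stacked=True, color=["blue", "red"])
ax.set_xlabel("Month")
ax.set_ylabel("Children matched for FFT at initial evaluation")
plt.tight_layout()
plt.show()
```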
We focus on initial evaluations because it is here that the argument for linking someone to an algorithmically identified service is least controversial. There is considerable research on the impact of “continued fails” in the system: struggling through the system is traumatic and demoralizing, and as families struggle, system fatigue sets in, leaving them less open to the insights and impacts of even good and appropriate clinical interventions. The goal of the human services system should be to link people to the right service, in the right amount, at the right time, and so we focus on these children most.
Another helpful analysis this graph leads to (though not shown below) is a “deep dive” into the children who were not referred to the service, to see what they were referred to instead. This helps us understand the appropriateness of deviating from the algorithms, and which treatments-as-usual are serving children who could otherwise be treated with evidence-based care. Over time, it then becomes possible to compare the outcomes of the two groups, showing the clinical impact of the evidence-based programs on children who receive the service compared with similarly matched children who do not.
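The deep dive itself is straightforward to sketch. Using the same hypothetical extract as above, plus illustrative service_referred and outcome_score columns (placeholders, not CDR's actual schema or outcome methodology):

```python
import pandas as pd

# Same hypothetical extract as above: one row per child who matched for
# FFT at an initial evaluation.
matches = pd.read_csv("fft_matches.csv")

# Which services did matched-but-not-referred children receive instead?
not_referred = matches[~matches["referred_fft"]]
print(not_referred["service_referred"].value_counts())

# Given a follow-up outcome measure, compare the two matched groups.
# This is only a sketch of the comparison, not an outcome study design.
print(matches.groupby("referred_fft")["outcome_score"].describe())
```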
Dashboards such as these are reviewed at CDR’s regular Stakeholder Roundtables, where their implications are discussed and strategy is adopted to move the system forward to the benefit of all parties.