Sharing evidence to inform the future of health care delivery and complex care: Lessons from the Camden Coalition and J-PAL North America partnership
In this second post of a two-part blog series on the Camden Core Model intervention, Aaron, Amy and Kathleen reflect on their key learnings from the evaluation and how study results will inform the future of the Camden Coalition’s work and the broader field of health care delivery and complex care. The first blog post explored the impetus behind the study and how a strong partnership between the researchers and practitioners paved the way for a successful evaluation.
What did you learn from actually conducting the evaluation, now that it's at its close?
Aaron: We knew participating in the study was going to place a burden on our front-line staff. We learned a lot about how to support staff wellbeing and how to build a culture that places a high emphasis on individual and group reflection to help avoid burnout.
Kathleen: Our hypothesis was that a short-term care management intervention would reduce readmissions. Along the way, it became clearer to us that a short-term intervention focused on connecting patients back to the health system wasn’t going to be enough. Unravelling lifetimes, and sometimes generations, of social need and wholesale community disinvestment takes much longer.
Amy: I’ve learned that the value of a good partner and a good partner relationship cannot be overstated. All kinds of unexpected things come up during an evaluation, and having a partner that is dedicated to the same mission and is clear on their goals and priorities is critical. It’s not just that they’re incredibly good at what they do. They’re also great to work with.
How will results from this evaluation inform the future of the Camden Core Model and what future areas of research are you exploring?
Kathleen: This is an organization that, from the beginning, was upfront about the fact that we wanted to adapt and change along the way. We understood that this was not just about medical complexity, but about the intersection of medical and social complexity. We are continuing to make those connections both in our care management work and our work with partners.
Aaron: We sought out the answer to one specific question very rigorously within this randomized evaluation, but there are a lot of related questions in the back of our minds that we can now spend more time on. Care management’s value proposition shouldn’t hang on just one outcome measure, so there’s a lot more work to be done studying additional measures that will give us a broader view of programmatic success. Given the heterogeneity of study participants, we also want to look more closely at the impact of intervention dosage and potential differential treatment effects across patient subgroups.
Kathleen: The fortuitous thing is that the study results are being published at a time when health systems are talking about social determinants in a way that is different than when the study started in 2014. We are spending a lot of time thinking about the connection, or lack of connection, to those social determinants. That is why it is critical that we continue to evaluate and learn.
How do you think this evaluation will contribute to the field of health care delivery and complex care?
Kathleen: The evaluation provides more information for the field as we think about program design, what’s needed to make these interventions possible, which populations could benefit most, and which populations need more than just a care management intervention. The evaluation confirms what we've known for a long time—that health care alone can't fix these issues. We see these results as showing that we really need to work harder to break down the silos between the services our patients are getting and what they actually need to become healthier.
Amy: The main thing I hope this evaluation will contribute to the field of health care delivery in general, and the field of complex care in particular, is that we need hundreds more Camden Coalitions—partners who are willing to be active learning organizations and work with us to embed rigorous evaluation into their ongoing standard business models. This study emphasizes the pitfalls that can occur with observational studies, particularly when you’re dealing with super-utilizers who are typically at the peak of their crisis when enrolled in the intervention, leading to a natural regression to the mean. Hopefully this will inspire other organizations to develop the data infrastructure needed to do rigorous evaluation and to consider partnering with J-PAL or other like-minded researchers.
Kathleen: For many patients of the Camden Coalition, hospital utilization is only unnecessary and avoidable when there’s a substitute for those stays. If we want to reduce hospital use because it’s more expensive and not necessarily the care the patient needs, then we need greater investment in community-based alternatives that are effective and evidence-based. For example, almost two years into the randomized evaluation, we implemented a Housing First program because it was clear to us that we would never medically stabilize certain patients until their housing was stable.
Aaron: We successfully targeted and enrolled people with some of the most extreme utilization patterns and complex needs. The individuals we serve have accumulated a lifetime of complexity from personal adverse life experiences to dealing with all our society’s structural inequalities. While we observed some regression to the mean, this population continues to have high hospital use and we must acknowledge the difficulties inherent in changing these trajectories.
What advice would you give to other organizations considering a randomized evaluation?
Kathleen: Participating in a randomized evaluation is hard work, especially for a small organization. But we are extremely proud of our work, the partnership with J-PAL, and what the evaluation results are contributing to the field. We would have liked to have seen other results, but we are also not afraid of what we’ve learned. We are using the results to push ourselves and, we hope, to inform the field of complex care more broadly.
Aaron: It’s really critical that every organization implementing a randomized evaluation, even if it doesn’t have a team of data scientists or analysts, has some degree of capacity to engage in quality improvement work.
Researchers from J-PAL’s network partnered with the Camden Coalition of Health Care Providers (the Camden Coalition) on a rigorous evaluation of their Camden Core Model. The Camden Core Model has received national attention as a promising super-utilizer intervention over the past few years. We sat down with Kathleen Noonan and Aaron Truchil, the Camden Coalition’s CEO and Director of Strategy and Analytics respectively, and Amy Finkelstein, Co-Scientific Director of J-PAL North America and the Principal Investigator of the study, to go behind the scenes of this research partnership and share their thoughts on what they have learned through the process.
In this first blog of our two-part series on the Camden Core Model evaluation, we explore the impetus behind the study and how a strong partnership between the researchers and practitioners paved the way for a successful evaluation. In part two of the series, we reflect on key learnings from the evaluation and how study results will inform the future of Camden’s work and the broader field of health care delivery and complex care.
Why was the Camden Coalition's leadership team interested in rigorous evaluation of the Camden Core Model?
Kathleen: The Camden Coalition is an organization that is dedicated to inquiry and field-building. We really welcomed the opportunity to incorporate a rigorous evaluation of our model. We felt that conducting a randomized evaluation of our program was a way to add to the growing body of knowledge in the emerging field of complex care.
Aaron: Our work has been focused on breaking down the silos between care systems. Fragmentation across health and social service providers seemed to be what was getting in the way of improved outcomes for our patients, and we knew we weren’t the only ones struggling with this question. Even though our program was still evolving, our health systems and community partners were willing to test our theory of change alongside us, so it was a “right time” opportunity.
Kathleen: We are a coalition of hospitals, community-based organizations, and Camden residents. It's part of our identity to be a learning organization. Without our partners supporting the organization and the idea of evaluation, we wouldn't have been able to do this work.
Amy, why were you interested in working with the Camden Coalition on this evaluation?
Amy: As an economist, I don't have particular insight into how to design care management or transition-to-care programs to better serve the super-utilizer population. But when I see organizations that have an idea about how to achieve this, I’m very interested in partnering with them to rigorously evaluate and learn from what they're doing.
However, not many organizations have the vision and courage to rigorously evaluate their program and apply those learnings to improve going forward. The Camden Coalition was a dream partner because they were willing to engage in rigorous evaluation, they were interested in looking at the data, and they were comfortable taking educated guesses so we could quickly scope out whether a partnership was feasible.
How did the research partnership between the Camden Coalition and J-PAL North America come about?
Aaron: We were first connected through a mutual colleague who played matchmaker, seeing the potential synergy between researchers looking for potential high impact randomized evaluations and an organization motivated to test its model. After a few phone calls with the J-PAL team, it was clear to our team that this was a great opportunity to collaborate with a high-quality research team and publish findings that were very important to the field. We had a stable funding window for the Camden Core Model and knew we were going to be running the intervention long enough. After gaining buy-in from our partners and staff, we started working with J-PAL.
Amy: The Camden Coalition was very clear and direct about what was important to them, and we were very clear and direct about what was important to us. They are a learning organization and are constantly modifying and learning from the field. They were very clear that they didn’t want to commit to not innovating for the duration of the randomized evaluation. This worked because they kept us well informed of any changes they made to the program design along the way. The clarity of purpose, goals, and what was and wasn't negotiable made the partnership very easy.
After you decided to move forward, how did the collaboration continue to be successful?
Aaron: We spent a tremendous amount of time in that early phase on planning and implementation. Amy and others from the team were on site and going on home visits, speaking with a wide variety of our staff to get a sense of what the model looked like and what challenges our patients faced. They were also very helpful in figuring out some of the necessary logistical components, including modifying our various databases to accommodate the randomized evaluation's needs. It was really a collaborative process that relied on our team to keep a close eye on day-to-day things and on J-PAL staff to be responsive and work closely with us to troubleshoot.
Amy: The traditional J-PAL model emphasizes that if you’re not going to be on site every day, you need to hire staff members who can be there to make sure the protocols are being followed. But it quickly became clear that we didn’t have to follow this model because Aaron and the Camden Coalition staff were so on the ball about communicating with us from the ground.
What was exciting, as well as challenging, about partnering on this evaluation?
Kathleen: Most nonprofits of this size don't have the capacity or resources to be able to do this type of study. We were already an organization that was interested in learning, and committed to the use of data, but this process made us more sophisticated and built up our capacity even further to be able to think about these types of research questions and how to adapt and grow from the results.
Aaron: Distilling our intervention to just one specific metric was a tension that we continually confronted. The primary metric in the study was the hospital readmission rate of our participants six months after discharge, which is important in the conversation around health care and reducing overall health care costs. However, we see health holistically, so we knew that we were going to be missing some other critical parts of the story about the impact our intervention has on people's lives and health care. It was a tradeoff we had to make to study this one measure that was important to the field.
Amy: The time it took to recruit the 800 study participants was longer than we had jointly expected, which was a challenge in terms of staffing, funding, and the delay of study results. However, it gave us an excuse to have a continued, long-term engagement with the Camden Coalition. During this time, the Camden Coalition introduced J-PAL researchers to prospective partners that were also data-driven and interested in rigorous evaluation. The Camden Coalition explained how a partnership with us could be a valuable and mutually beneficial experience, giving credibility to us as researchers.