So I made a few changes from what's in your book, but not a lot. As we go forward I think we can make it a bit interactive, and we'll see if we stay on the time schedule; we'll try for that. So if you have
questions as we go, we'll try to address those. First are disclosures. The main disclosure I have is a general conflict of interest: my sister is a speech pathologist in Sacramento, my daughter-in-law is a speech pathologist in the Burbank school system, and my brother-in-law's mother was also a speech pathologist [laughter]. I don't know if it's destiny that I'm here or what, but hopefully it helps; it's an interesting tie-in. In terms of acknowledgments, I want to especially acknowledge the National Institute of Mental Health, where most of my funding has come from over the years in implementation science and implementation research. For me, it was born out of my interests: my training is as a clinical and organizational psychologist.
And when I got out of grad school and went to do a postdoc in health services research,
I was trying to figure out a way to bring those two things together. And it kind of
blossomed into implementation science and implementation research where we could think
about how organizational and system issues impact clinical services, and process, and
outcomes. I also want to acknowledge NIDA and Hendricks Brown, who has a methodology center funded by NIDA on prevention and implementation methods. Also Enola Proctor, one of our speakers, who is PI of the Implementation Research Institute, also supported by NIMH. So enough about that. I'm going to talk a little bit about conceptual
frameworks. I think you got some of that yesterday. There’s lots of them. The challenge is making
sure you’re using something that really addresses your particular implementation question. I’m going to talk about a model that I developed
with some colleagues where we had this phased model over time that talks about multiple
system and organizational levels. I'll describe some studies both in terms of their design for addressing implementation questions at different levels, and in terms of addressing effectiveness or efficacy while assessing implementation issues as well. I'm going to describe those studies in different settings. Now most of the ones that I have the slides
for take place in the United States. If we have time, I also have some examples of studies in under-resourced countries, where there are some real challenges in terms of credentialing of providers, and in how you field evidence-based interventions when you don't have highly trained practitioners. So that's kind of where we're going. When I think about implementation science,
it's a broad field. There are lots of traditions that inform implementation science. My particular take is from policy and organizations and how those impact implementation. But there are lots of different approaches you can use when you're thinking about this, all the way from social network theory, how interventions move among populations of providers or clinics, or how organizations network, to much more top-down policy: how policy mandates impact what's being done in particular fields. Especially if you're working in school systems, policies can affect what can be done in schools and in special education. Some of my colleagues that I work closely with, Aubyn Stahmer and Lauren Brookman-Frazee, work very frequently with school systems; they focus on implementation of autism spectrum disorder interventions in school systems. I like to make a distinction between frameworks
and strategies. So a framework is really a proposed model of factors that you think are
likely to impact implementation and sustainment of evidence-based practice. It’s a theoretical
model. A good model should identify kind of the key factors that you may want to address
in a research application, or in improving care. I will probably make this point a few times: if you're putting together a proposal, it's really important to tie your framework in throughout your whole project, so that your theoretical model leads to your approach, how you're going to address the question, but also to all of your measures and your
outcomes and implications for the field. Bringing in that framework is not just a matter of saying, "hey, we got our framework." I've seen a number of applications as a reviewer for NIH where people say, "We're using this framework," but then the rest of the proposal doesn't tie in. It really needs to be cohesive. So you need to think carefully about your frameworks as you're thinking about implementation science. In contrast, a strategy is really a
systematic process of the approach that you’re going to use to adopt or integrate an evidence-based
innovation into usual care. So Byron Powell and colleagues at Wash U did
a nice review of implementation strategies and identified what they called discrete,
multifaceted, and blended strategies, getting more complex as you move down that continuum. As an example of a discrete strategy, I worked with some folks at the VA on implementation of an HIV clinical reminder. That implementation strategy was designed to increase HIV testing among at-risk veterans. The intervention was a software program that would run when a vet was coming into the VA, no matter what service they were coming in on. It would go out, look at the medical record, identify risk factors for hep C and HIV, and if it met a threshold, the nurse or physician would get an electronic clinical reminder to invite the vet for testing. So there's an example of a very discrete strategy that's targeted right at the point of care.
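To make that kind of rule logic concrete, here is a minimal sketch; this is my illustration, not the VA's actual software, and the record fields, risk factors, and threshold are all hypothetical:

```python
# A rough sketch of the rule logic behind a point-of-care clinical reminder.
# Hypothetical field names, risk factors, and threshold -- not the VA's system.

RISK_FACTORS = {"idu_history", "hcv_positive", "sti_history", "blood_transfusion_pre_1992"}
THRESHOLD = 1  # assume any single risk factor is enough to trigger

def hiv_reminder_due(record: dict) -> bool:
    """Return True if this visit should trigger an HIV-testing reminder."""
    if record.get("hiv_tested"):
        return False  # already tested; no reminder needed
    hits = RISK_FACTORS & set(record.get("problem_list", []))
    return len(hits) >= THRESHOLD

visit = {"problem_list": ["hcv_positive", "hypertension"], "hiv_tested": False}
print(hiv_reminder_due(visit))  # True -> the nurse or physician sees the reminder
```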
There are also multifaceted implementation strategies. So you might have a clinical reminder, but also build leadership support so that physicians and nurses actually use those clinical reminders. Because there are lots of clinical reminders, so what do they attend to in that process? And then blended implementation strategies
are even more comprehensive: building collaboration, building buy-in, coming up with specific strategies, and really melding all of those together. Byron and his colleagues identified planning, education, financing, restructuring, quality management, and policy change as types of strategies within those different levels of intensity or comprehensiveness. So I don't want to belabor this, but it's
I think important to think about your frameworks and then how your strategies may fit into your frameworks too, because you want a strategy that's congruent with your theoretical model of change. If your theory says it's all happening at the clinician and provider level, then your strategy should address that level. So those are just some things to think about as you're planning your work. So you know, we have theoretical frameworks,
and why do we want frameworks? Well, if your goal is to have a simple tire swing on the tree, you may propose that, but then if you ask your engineer to design it, you may get something different from what you really wanted. So hopefully, as I was saying, the framework really guides your thinking about what you're looking for, how you're going to get there, and what the end product should be. We often think of frameworks as static, but your framework should also help guide your process and what you attend to as you're implementing evidence-based practice. Also coming out of the hotbed of implementation
science at Wash U, Rachel Tabak, Ross Brownson, and David Chambers did a review of implementation frameworks and models. They identified 61 different frameworks to review. So I think you were probably exposed to one or two, or a few, frameworks yesterday. Well, there are lots out there to select from; think about what fits what you're working on. They evaluated frameworks on construct flexibility, focus on dissemination versus implementation, and socio-ecological level, whether individual, community, or system. But many frameworks span these different levels.
I think most focus on implementation. In their table they outlined some of the frameworks across some of these dimensions, so it's a really useful resource, and the source should be in your notes if you want to look it up as well. I highlighted just a few here: the Consolidated Framework for Implementation Research, developed by Laura Damschroder and her colleagues at the VA; ARC (availability, responsiveness, and continuity), developed by Charles Glisson, really to support his organizational change intervention; and then our EPIS conceptual model of evidence-based practice implementation. And I want to say at this point that with
your particular research questions, framework selection becomes important depending on where you're going to submit your grants, where you're going to go, and what the accepted
or understood frameworks are. So I was working with some colleagues on a
hep C application for the VA. And we used my framework; of course I want to use my own framework. But it didn't quite fit with the paradigm in the VA. The CFIR, the Consolidated Framework for Implementation Research, came out of the VA, and the reviewers there were much more familiar with that framework. So in our revision, we changed frameworks and modeled our methods to fit that conceptual model. So it's not just how your framework fits;
but it’s also grant strategy and development strategy to make your work as competitive
as possible as you move forward. So I’m going to talk briefly about these and
in particular about their use of levels and phases. We see in many frameworks these common elements of multiple levels. Implementation often occurs in complex systems, and complex systems mean you need to decide what you're going to attend to. So I worked on a project with Sheryl Kataoka
up at UCLA on implementation of the Cognitive Behavioral Intervention for Trauma in Schools (CBITS). It was in middle schools in the LA Unified School District, and in that developmental study we were looking to see if we could take Katherine Klein's implementation model, which was developed for the implementation of software technology in manufacturing plants, and bring it in. We took this idea of creating an implementation climate in the schools, combined it with the Institute for Healthcare Improvement's collaborative breakthrough process, and tried to see if that was an acceptable way to build buy-in for implementation of CBITS in the schools. So we had to think about, you know, the school
district. On our collaborative team for the project we had the mental health director for the LA Unified School District. We also had provider organizations present, along with their directors. We had the researchers and developers of the CBITS intervention, and also input from the actual service providers who go and deliver it in the schools. So we're thinking across the school district, school principal buy-in, community-based organizations, researchers, providers. All of those levels, for us, were important in this process. So that's just one example of thinking about
how multiple levels matter. If the school district has a policy that's at odds with what you're trying to do, it's very difficult to get change to occur, regardless of whether you have a charismatic principal at one school, or, you know, a supportive district superintendent in one district. Or you may have district superintendents that are not supportive, or principals that don't buy in. So all of those levels are important in the implementation process, and it's important to identify concerns at those different levels. I also think about implementation in terms
of phases over time. Different models have different names, but essentially there are phases related to identifying problems, thinking about how you're going to address those problems, actually implementing change, and then sustaining change. There may be issues in those different phases that you attend to more or less as you go through the implementation process. Or your implementation project may focus on just getting people ready to implement, or it may focus just on actual implementation. So the phased approach kind of helps me think
about what I want to do and what I want to address in the implementation process. Towards the end of the talk I have a model of how we've been trying to create a process for doing that. So this idea of levels of change, as I
mentioned, is not new. Steve Shortell and Ewan Ferlie talked about this in regard to quality improvement in health care and identified issues at the larger system environment level: reimbursement, legal and regulatory policies, and organizational structure and strategy. I think in mental health and social services we also see that organizational culture and climate, and leadership, are important at this level. But also at the group and team level: how
teams and workgroups work together matters. This has really been shown by Amy Edmondson at Harvard Business School, who has done some really interesting studies looking at implementation of a minimally invasive cardiac procedure in surgical teams. What she showed is that where you had a strong physician leader who created a climate of psychological safety, support, and problem solving among the team, they were able to successfully implement the minimally invasive procedures and sustain them. Whereas teams where that was not the case,
where you had adversarial relationships and people were afraid to speak up, actually went back to the more invasive open-heart surgery: make the incision, open the chest, and do the heart surgery. So when you think about the impact of these implementation factors on patient care and patient process, it's a really critical application of implementation science. At the individual level, I think this speaks
more to providers' knowledge, skills, and expertise, but also to those receiving services. We also see advocacy groups, especially around ASD, that are very important in sometimes shaping policy and shaping views of what's acceptable in terms of practice. So these multiple levels can be really critical. And then in terms of the phases, I talked
about that, so I'm not going to belabor it. I mentioned the Consolidated Framework for Implementation Research, the CFIR. In your handouts there are more models; I can talk about those if you want, but I'm just going to talk about these three for right now. The CFIR domains are: the characteristics of the intervention; what is called the outer setting, or kind of the policy or larger system setting; the inner setting, within the organization; the characteristics of the individuals involved, which can be the providers, clients, and also management, the people who are involved in the implementation; and then the process of implementation. So this came out of the VA, like I said, and
is a pretty widely accepted model. What Laura did in her study, and this was early, she published this in 2009, so it was 2007, 2008, was review the existing models at that time. In their article they have a nice table that lays out the elements of different frameworks, similar to what Rachel Tabak and her colleagues did, and then kind of pulls them together into the CFIR. Charles Glisson's approach is nice because
he's got stages of collaboration, participation, and innovation across phases that we think of as implementation phases: problem identification, direction setting, implementation, and stabilization, which you can think of as sustainment, or sustainability. The components that he works on in his ARC organizational intervention are things like leadership, relationships and networks within and across organizations, teambuilding, developing information and assessment and feedback processes, and participatory decision-making. And conflict management, which I think is an important
one. Some of our recent qualitative work has looked at the process of implementation across a large service system, and what we've seen is initial buy-in and collaboration, and then, as things get going, different stakeholders' own agendas, interests, and needs come to the fore, and negotiation occurs. In a successful implementation those are resolved and coalesced, but you kind of have to be prepared. I'm going to talk about this a little bit with the problem-solving orientation, the problem-solving approach in implementation. It's not like you're going to set it up and we're just going to go and there aren't going to be problems; all those things have to be worked on and negotiated. So goal setting, a continuous improvement approach,
job redesign, and self-regulation are the components of Charles' model, and I'm going to talk about outcomes of one study of his model in a little bit. EPIS is the framework that I developed with Mike Hurlburt and Sarah Horwitz under the implementation methods research group; John Landsverk was PI of that center. We identified these four phases: exploration,
preparation, implementation, and sustainment. Within each of these phases, there are certain outer context, or system, issues to consider, and inner context, or within-organization and population, issues to consider as you move forward. So it enumerates common and unique factors across levels. And in the model there's a lot of stuff. So with this model, what we tried to do,
based on a review of the literature, was identify most of the things we could find in the existing literature that are important in these different phases and across levels. But this is not meant to imply that you need to address every single thing. The idea is to think about, for my particular intervention that I want to implement, in a particular setting, across these phases: exploration, thinking about what we're going to implement; preparation, identifying the barriers and facilitators and addressing those; actual implementation, where we're beginning training, doing fidelity assessment, those sorts of things; and sustainment, how do we keep this going after the grant funding runs out? What are the things we want to attend to across those phases for our particular study? Doug Novins led a systematic
review, which we did with him, of dissemination and implementation studies in children's mental health, for children's mental health disorders. In that review we identified these factors based on the studies that ended up in the systematic review, so you can see it's winnowed down. And this is across studies: if we were to pick any one study and ask what factors were looked at in that one study, this list would be winnowed down even further. So when you're thinking about these frameworks,
it's really important to think about your context and your issues. In the EPIS framework, I think about these transition points. In the exploration phase you're thinking about the fit of the evidence-based practice. Is the system ready? Are organizations ready? Providers, are they ready to go? What do we need to do to make this happen, both in the outer and inner context? Once the adoption
decision is made, you enter the preparation phase. Then we're marketing to stakeholders, we're building collaboration, we're really addressing the outer and inner context issues that we've identified. And then once we're ready, or I should say when we feel ready, because we're almost never really ready, the training, the coaching, whatever the model, the intervention implementation begins. And then it's a matter of alignment of outer context support and problem solving outer and inner context issues. Once the evidence-based practice or intervention
is being delivered with fidelity, or the level of quality that we're comfortable with, then we're really moving into that sustainment phase, where we sustain this over the long run. And I keep including this problem-solving orientation, and I'd like another bar that says, in every phase you should be thinking about sustainment. So in exploration you should already be thinking about sustainment. It's not just necessarily a demonstration of efficacy or effectiveness, and we may not be doing an intervention for sustainment, but we may want to be attending to factors in our research design along the way that are going to help us sustain an intervention once we get through our trial. So we'll switch gears. Often in implementation
studies, especially hybrid studies that pair effectiveness and implementation questions, we want to use mixed methods. There's a lot of interest in mixed methods, and they're part of many training programs in implementation science as well, including the Training Institutes on Dissemination and Implementation Research in Health, sponsored by the NIH, and also the Implementation Research Institute training grant. So I like to use mixed methods, and you know,
it's interesting, because when I came out of grad school and was doing my postdoc, I was a quant jock. I had done five graduate statistics courses, and I remember doing structural equation models by hand in graduate school, and I thought that was a badge of honor. So it took me a while to come around, but it wasn't until I was really faced with understanding implementation challenges, kind of from the ground up. We can look at quantitative measures, but do they tell us the real story? So, integrating mixed methods. I was very
fortunate to work with Larry Palinkas, who's up at USC now, a medical anthropologist, in thinking about designing these studies and addressing some of the issues that we deal with in implementation studies, where your unit of analysis may shift from a patient or a client to a team or an organization. And suddenly your power to detect effects kind of dries up, because your unit of analysis really changes.
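To see why power dries up, the standard design-effect arithmetic for clustered data helps; here's a minimal sketch, where the sample sizes and the intraclass correlation are hypothetical numbers of mine, not the study's:

```python
# A minimal sketch of the design effect: when clients are nested in teams,
# the effective sample size shrinks by 1 + (m - 1) * ICC, where m is the
# cluster size and ICC is the intraclass correlation. Numbers are hypothetical.

def effective_n(n_total: int, cluster_size: float, icc: float) -> float:
    """Effective sample size after adjusting for clustering."""
    design_effect = 1 + (cluster_size - 1) * icc
    return n_total / design_effect

# 400 clients in 40 teams of 10, with a modest ICC of 0.15:
print(round(effective_n(400, 10, 0.15)))  # ~170 -- well under half the nominal 400
```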
So it's really important to think about design. If our quantitative design isn't as strong as we would like, how can we strengthen it with the qualitative component? What more can we learn, and how can the two methods inform one another? You can answer confirmatory and exploratory questions, and verify and generate theory, at the same time. So in the study I'm going to talk about, in the
child welfare system in Oklahoma, we did annual meetings where all of our stakeholders, including agency directors and folks who were directing services in the community, would come to San Diego every year. We would review the quantitative and qualitative data that we had, do preliminary analyses, and talk about what it means, what it means for the next phase of the study, and how we're going to integrate it. So we really were truly trying to mix methods, not just have a quantitative component and a qualitative component where never the twain shall meet, but really bringing those together. So I'm mindful of the time. How much time
do I have, Leslie? >>You have like 20 minutes. >>Oh, 20 minutes, awesome. You may not think that's awesome, but I do. So I want to talk about this mixed methods
study of statewide EBP implementation. This is an implementation of SafeCare in Oklahoma's statewide children's services system. SafeCare is a child neglect intervention. In child welfare, 65 to 75% of cases are because of child neglect. And while physical abuse and sexual abuse really hit the headlines, there are more child deaths from neglect than from other forms of child maltreatment. And as I said, it's just highly prevalent; it's the leading reason for families to become involved with the child welfare system. In addition to harm to children in terms of physical harm and emotional harm, there's also developmental delay associated with it, poor functioning, poor school functioning, and lots of other outcomes in childhood and adolescence that are a function of child neglect. The study mixed quantitative
and qualitative methods. We were doing quantitative assessments of the organizations, trying to look at things like leadership at the team level, culture and climate of the teams, and how that impacted providers. And our qualitative methods involved focus groups with providers and interviews with agency directors, area directors, and the child welfare system folks, because we were interested, in this statewide implementation, in all the factors that might impact how it's implemented and sustained. So it was longitudinal at the organization and team level. With a no-cost extension, we actually followed
these teams for 6 years, 12 waves of data collection, with a 95% or better response rate across all waves. It really required ongoing collaboration, working in the trenches, and being able to say to the organizations in the service system: we have something to give back and to share as we go forward, and we'll do this process together. I think that was really helpful for engagement in the statewide system. So I built this on top of an effectiveness
trial of SafeCare that Mark Chaffin at the University of Oklahoma was running, and they had assigned teams. They had to do assignment of teams rather than just randomization because of differences in regions. So, randomization. Everyone thinks randomization is the gold standard. Yes, no? I don't know. Reviewers often do, and policymakers often do. But where you have six regions: randomization is based on the law of large numbers. If you had 6,000 regions, I'd say yes, randomize. If you have six, you're going to get inequities.
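Here's a quick simulation, my own illustration rather than anything from the trial, of why randomizing only six regions invites chance imbalance on a region-level covariate:

```python
# A minimal sketch: randomize six regions (hypothetical caseload sizes)
# into two arms of three, and look at how far apart the arms land on
# average. With so few units, the law of large numbers offers no protection.
import random

caseloads = [120, 140, 180, 300, 450, 600]  # hypothetical regional caseloads
overall_mean = sum(caseloads) / len(caseloads)

gaps = []
for _ in range(10_000):
    shuffled = random.sample(caseloads, len(caseloads))
    arm_a, arm_b = shuffled[:3], shuffled[3:]
    gaps.append(abs(sum(arm_a) / 3 - sum(arm_b) / 3))

print(overall_mean)           # ~298
print(sum(gaps) / len(gaps))  # the average between-arm gap is a large fraction of that
```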
So there was experimentally controlled assignment. The red is the teams that were implementing SafeCare, and the green were usual care, regular home visitation. These are all home visitation services, where the home visitor would go out once a week, meet with the family, and work with them. Services as usual was fairly unstructured: go in, try to assess what problems there are, and address those problems. Whereas with SafeCare there's really a focus on home health, home safety, and parent-child interactions, improving the communication and interactions between the parent and child, or the parent and infant. So in the study, there was the SafeCare condition,
and then teams within SafeCare or services as usual were randomized to coaching or no coaching. This is the fidelity monitoring, fidelity assessment piece: a coach would go out with the home visitor in the home, observe them, and then coach them on the model, or demonstrate different activities and things like that. So I came in; actually, Mark and Kathy Simms
had come out to our center to talk about the study they were doing, and in our discussions we thought, wouldn't this be a great opportunity to look at implementation issues? And this is something Leslie and I were talking about earlier: you can sometimes be opportunistic in thinking about implementation studies. If there is, for example, a policy change, or if a state decides to implement something broadly, can you build an implementation study on that? Can you work with the policymakers or agencies to bring in a particular practice that's going to improve care, and then build a study on top of that? So I proposed this R01 in which we wanted
to look at workforce issues: the impact of implementing on job autonomy and work attitudes, people's perceptions of their work. Because providers were used to just being able to go to the home, do their assessment, do whatever. And now you're imposing an evidence-based intervention, very structured, manualized, and not only that, but for half the teams, someone was going out to watch them and observe them doing their work. So we thought it would reduce job autonomy, probably worsen work attitudes, and increase staff turnover. We also wanted to look at things like clinical
process: what happens with the working alliance, the relationship between the provider and their clients, when you're doing a more structured intervention? That's the clinical process piece. Then organizational factors: like I said, leadership, culture, climate, and structure at the team level. The organizational process piece is how those organizational factors impact the individual provider's own adaptability, flexibility, and attitudes toward evidence-based practice, the fidelity with which they're working on the model, and then ultimately outcomes. We're actually trying to put a model together
to look at a lot of this stuff together, but we had looked at some of those outcomes separately. The first quantitative piece we published looked at the effect of the evidence-based practice and coaching on staff retention. We had hypothesized that in the SafeCare condition with coaching, where you have reduced job autonomy, you would have higher turnover rates among those teams. And we found exactly the opposite. This is a survival analysis, and this is the SafeCare with coaching condition: essentially, what we found over the course of 60 months was a higher probability of retention. And you can see the annualized turnover by condition: SafeCare with coaching was 14.9%, versus 33%, 37%, and 41% in the other conditions.
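For readers who want to see the shape of that kind of analysis, here is a minimal Kaplan-Meier-style sketch with invented data (the study's actual models were more involved); it assumes the lifelines package:

```python
# A minimal sketch of time-to-turnover survival analysis by condition.
# The data here are invented for illustration; staff still employed at
# the 60-month follow-up are censored (event = 0).
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "months":    [12, 60, 35, 60, 8, 22, 60, 15, 60, 30],
    "turnover":  [1,  0,  1,  0,  1, 1,  0,  1,  0,  1],
    "condition": ["SC+coaching", "SC+coaching", "SAU", "SC+coaching", "SAU",
                  "SAU", "SC+coaching", "SAU", "SC+coaching", "SAU"],
})

kmf = KaplanMeierFitter()
for cond, grp in df.groupby("condition"):
    kmf.fit(grp["months"], event_observed=grp["turnover"], label=cond)
    # Estimated probability of still being on the job at 60 months:
    print(cond, float(kmf.survival_function_.iloc[-1].iloc[0]))
```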
So in this kind of setting, the workforce is critical. Think about what it takes to implement: to train providers, get them up to fidelity on an intervention, and then have them deliver the intervention where you have high rates of turnover. It's a very tough issue, and it's an issue that's faced in almost every setting I know of, whether it be physicians providing telemedicine, bachelor's-level home visitors as in this case, or master's-level speech therapists. People move around, so the more we can keep trained professionals in the field, the better. But we also wanted to know qualitatively what
this means. Does the low rate of turnover signify satisfaction with SafeCare? The answer from our qualitative analysis was yes, some providers loved the structure; many providers felt that there was some value to the EBP and that it benefited their families; but some providers disliked having to implement some of the evidence-based practice modules. Some of them were harder than others. If you're doing the home health module and your client is a nurse, and you have to go through those steps, that makes interacting with your client uncomfortable. And some providers felt that SafeCare detracted from dealing with more immediate issues, like the crises that tend to occur with these families. So we tried to use our mixed methods
to contextualize our quantitative findings here. Early in the study, we actually used quantitative measures of whether providers liked SafeCare or didn't, their attitudes toward SafeCare once they had a little bit of experience, to do maximum variation sampling for our qualitative interviews. So we used the quantitative to inform our qualitative sampling approach, and we used the qualitative to inform our quantitative data collection.
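As a sketch of that sequential design, here is how quantitative attitude scores might drive maximum variation sampling for interviews; the provider IDs, scores, and strata sizes are hypothetical:

```python
# A minimal sketch of maximum variation sampling: rank providers by a
# quantitative attitude-toward-SafeCare score and interview from the low
# end, the middle, and the high end. Scores here are made up.
providers = {
    "p01": 4.8, "p02": 1.6, "p03": 3.1, "p04": 2.2,
    "p05": 4.1, "p06": 1.2, "p07": 3.7, "p08": 2.9,
}

ranked = sorted(providers, key=providers.get)  # ids, lowest score first
mid = len(ranked) // 2
sample = ranked[:2] + ranked[mid - 1:mid + 1] + ranked[-2:]
print(sample)  # two least favorable, two middling, two most favorable
```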
We also looked at the focus groups and worked with the providers to understand the factors associated with SafeCare implementation for them. The things that were critical were acceptability of SafeCare to the caseworker and to the family; appropriateness to the needs of the family; the caseworker's own motivations (so sometimes you find a shift: newer caseworkers coming out of school or into an internship may have liked the structure, for example, that it provides in delivering casework); their experiences with being trained (so having a brief, more focal training where they can get out in the field and practice, rather than a long, protracted training period); and the extent of organizational support: at the team level, do their organizations
and leaders really support the practice and support them in delivery of the practice.
And the impact of SafeCare on the outcomes and the process of their case management. So part of what SafeCare involves is checking
off skills that the parents are learning. So it gives them a metric for seeing, are
the parents getting better at interacting with their child? Are they getting better
at providing a safe environment, a healthy environment for their child? So those aspects
of the intervention might be something you’d want to look at in a study. Can we change
the way we provide feedback to providers or to organizations around an intervention so
that they understand their clients are improving, rather than just going on a sense that things
are improving? From our interviews with managers and executive
directors, the factors that were important for them were availability of resources, both
in their contracts with the state, but also extra resources for the families. You know
switch covers for outlets in the home, doorknob covers, locks for cupboards where there may
be toxic chemicals, and things like that. And actually, one of the interesting things I learned is that it's not so much toxic chemicals that are implicated in poisonings for kids; it's things like strawberry-flavored shampoo, these kinds of things we may not consider putting up out of reach, but those are really the more common sources of poisoning for kids. That's an aside. For the managers and directors, other important factors were positive
external relations with the other agencies: how they compete against one another, but also collaborate when need be in the system. The support of top agency leadership for evidence-based practice, so having real buy-in and support from the top leadership that filters down through the organization. Creating high motivation and low resistance in staff, so really trying to sell it to their providers. And tangible benefits for the staff themselves: I'm learning a skill that makes me a better professional, I can be more effective with my families. Although we've seen this too, going back to the turnover issue: sometimes when folks get credentialed it makes them more marketable, so that's also a challenge. And finally, that the perceived benefits of implementing
the evidence-based practice outweigh the costs. So you can see a difference between what's important at the provider level and at the manager and executive director levels. And that has implications for how you might design an intervention to actually improve the context for implementation of evidence-based practice. Okay, so we also wanted to look at
the issue of leadership, which is one of the areas that I'm interested in. So we looked at the impact of transformational, or charismatic, leadership. This is comprised of things like individualized consideration: do team leaders (now we're talking about each of those 21 teams we saw) pay attention to the individual needs of their staff? Can they intellectually stimulate their staff? Can they get them excited? Can they motivate them? Can they be inspirational in that process? What we found, on the left in yellow, are
the teams implementing SafeCare; on the right, the services as usual. Where you had a strong transformational leader during implementation, that created a more positive team climate for acceptance of innovation, and more positive attitudes toward adopting evidence-based practice. So one of the areas I'm working in now:
we've developed, with developmental funding from NIMH, a leadership and organizational change intervention that we want to test more broadly, to see if we can bolster this kind of leadership to create a positive climate and get better implementation and fidelity. Take it beyond just providers' attitudes, and see if we can get better fidelity and outcomes where we have those multiple levels of support across an organization and team. So this is the hybrid part of the study.
This is Mark Chaffin's effectiveness data, published in Pediatrics. He showed that recidivism, or re-reports for abuse and neglect, was reduced for the population overall in the services, and for indicated cases, those families where neglect was really the primary issue, there was an even stronger effect. So it's nice: we've been looking at implementation issues, but as I said, we built the study on top of an ongoing effectiveness trial, so we're able to look at both, and now we're able to combine data sets to look across the entire intervention. That brings me to hybrid designs in
general. Did somebody mention hybrid designs yesterday? Yeah, okay. So Geoff Curran and his colleagues described hybrid types 1, 2, and 3. I think what we're seeing mostly, and you can correct me if I'm wrong, David, is hybrid type 1 studies coming in, and some type 2s. In a type 1, you're looking primarily at effectiveness, but also looking at implementation factors along the way, more observationally. In a type 2, you're looking at both: testing an intervention and also testing out an implementation strategy. So the study I want to talk about is one that
Tom Patterson and I have. Tom is an HIV prevention researcher, and he developed an intervention to reduce HIV transmission among sex workers. He developed it and did his efficacy trials in Tijuana and Ciudad Juarez, and found good efficacy for the intervention. It's a cognitive behavioral intervention where they train sex workers to negotiate safer sex with their clients. In these settings in Mexico, the understanding is that you're not going to be able to eradicate sex work, but can we take a harm reduction approach? So we partnered with Mexfam, which is a large
community-based organization in Mexico, and 13 sites of their women's reproductive health clinics, training their health workers to do outreach to sex workers, bring them in for testing, and deliver this model, with the goal of reducing STIs, and in particular HIV. Those are the sites; we're actually just wrapping up data collection in the last 3 sites. And the study design, this is, I think, the
best way I could illustrate it, and I think you do have this one. I tried to really highlight what were the effectiveness trial methods and what were the implementation research methods. For implementation, we're observing the implementation process and using a train-the-trainer model. Our Mexican physician goes to each site and trains a person at that site in the intervention; they then train staff in the agency. But that's at all sites, so we're looking at the process, but we're not actually comparing it to a different implementation strategy. The research methods are quantitative
measures, qualitative interviews, and focus groups at each clinic just prior to implementation; recruiting the sex workers; and then randomly assigning (this is the effectiveness trial) the sex workers to the Mujer Segura ("healthy woman") intervention or a standard HIV counseling session. The implementation strategy, as I said, is train the trainer; the HIV prevention intervention is the Mujer Segura cognitive behavioral intervention. At the end of each site's recruitment, the sex workers are followed up over time, and we go back and do our implementation methods again at the end of the study. So what we're looking at in terms
of the implementation framework is, pulling again from a multilevel framework: organizational factors, the culture, climate, leadership, organizational support, and social influence at each clinic, and their relationship to Mexfam Central in Mexico City. Also provider characteristics, such as job satisfaction, organizational commitment, turnover intentions and turnover, demographics, experience, attitudes toward evidence-based practice, and personal innovativeness, and the impacts of those on intervention fidelity and counselor competency, and then outcomes. We'll be looking at that with both quantitative and qualitative measures as we get our data in. Okay, so that's that study. I've got 5 minutes? Okay. I want to talk briefly about some
other models, in particular cascading models, which typically address scale-up issues: how do you take local expertise and move it into a community? You may have different hypotheses in this type of study. You may be interested in equivalence rather than difference. When you have the highly trained providers, can they train another organization in the community? Can they train yet another? And can you get equal levels of fidelity? So now the hypothesis shifts from difference testing to equivalence testing.
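A common way to formalize that shift is an equivalence test such as two one-sided tests (TOST); this sketch uses made-up fidelity scores and an arbitrary plus-or-minus 5-point margin, not anything from the studies discussed:

```python
# A minimal sketch of equivalence testing (TOST): is fidelity in a later
# training generation within +/- 5 points of the first generation?
# Data and the margin are hypothetical.
import numpy as np
from scipy import stats

gen1 = np.array([88, 92, 85, 90, 87, 91])  # trained by the developers
gen2 = np.array([86, 89, 90, 84, 88, 87])  # trained by local trainers
margin = 5.0

n1, n2 = len(gen1), len(gen2)
dof = n1 + n2 - 2
pooled_var = ((n1 - 1) * gen1.var(ddof=1) + (n2 - 1) * gen2.var(ddof=1)) / dof
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
diff = gen2.mean() - gen1.mean()

# Two one-sided tests: both p-values must be small to conclude equivalence.
p_lower = 1 - stats.t.cdf((diff + margin) / se, dof)  # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, dof)      # H0: diff >= +margin
print(max(p_lower, p_upper))
```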
A nice example of this is Patty Chamberlain and Joe Price's cascading dissemination of a foster parent intervention. This is Multidimensional Treatment Foster Care, for kids with behavior problems in foster care, developed up at the Oregon Social Learning Center and efficacy tested in 3 counties in Oregon. In phase 2, the original developers trained and supervised the interventionists, this in San Diego. And in phase 3, those interventionists trained another cohort in San Diego. So the idea is, can you roll this out in a community and get good levels of fidelity? The answer is yes, if you do it right. Baseline rates of behavior problems didn't differ for phase 2 and phase 3, so you didn't see worse problems; essentially the intervention, even though it rolled out away from the developer, was not ineffective. There were no differences in rates of problems during the trial or in treatment termination. Assignment was associated with a significant decrease in child problems from baseline overall, so overall, the intervention was working. And there was no decrement in treatment effect when
the intervention developers pulled back and had staff trained locally. Another one that's in process right now is a study here in San Diego County called Interagency Collaborative Teams to scale up evidence-based practice. This again is multilevel, where the academic partners, myself and my colleagues and folks from OUHSC and SafeCare, work with San Diego County Child Welfare, with the United Way as a funder of services. The United Way in this situation decided that,
rather than giving $5,000 or $15,000 grants to print pamphlets and put them in pediatricians' offices, they wanted to make wholesale system change and try to impact an entire system. So United Way stepped up and provided training dollars for a few years to support building a seed team in the community. The intervention developers from Georgia State University trained the seed team to become certified trainers and coaches, and then the seed team trains successive SafeCare teams across the county in the rollout. We're just looking at initial fidelity
data using a really interesting latent variable model that Mark Chaffin is running, showing just minor, nonsignificant decrements in fidelity over time as it rolls out. The other unique feature of this study is
the interagency nature. These teams are made up of providers from multiple organizations who work together on treatment teams, so the idea is to spread the expertise across the contracting organizations they work for. We used the metaphor, or analogy, of a distributed computing system in our proposal, trying to address that innovation criterion. So what does it mean when we distribute expertise throughout a system? Can we do it effectively? So I showed you Charles' model; here are some quick
outcomes. Charles Glisson's intervention is really interesting because it focuses on improving the culture and climate of human service organizations. And as far as I know, he's one of the only people to demonstrate that if you improve worker satisfaction, organizational climate, and culture, you can get improved clinical outcomes, even if you don't change the clinical intervention. His previous studies were correlational,
kind of proof of concept studies. And then this one, this is an implementation of multisystemic therapy in a 2 x 2 design, crossed with his ARC organizational intervention. And he was able to show significant reductions in out-of-home placements and behavior problems for ARC and multisystemic therapy separately. So both were having an effect, but especially an effect on reduced child behavior problems in the ARC counties where MST was present. So it's a really interesting study, where you're looking at those organizational issues and at the clinical issues and the clinical intervention at the same time. So the last study I want to talk about is
just about adaptation, and I see I'm getting the watch. I just want to say this is an integration of the EPIS framework with a process to really utilize it. In the exploration phase, and this was funded by the CDC, what we developed was a system-level assessment, an organization assessment, and provider and client assessments to understand characteristics in the exploration phase. And it's not just adapting an evidence-based practice; it's asking what do we adapt in organizations, what do we need to adapt in the service system, to help support this in the long run, thinking about sustainment. Then we convene what we call an implementation
resource team that involves academic researchers, intervention developers, trainers and coaches, administrators, clinicians, and peer leaders through the preparation phase. Once we start implementing, we build in some outcome assessment that we can feed back into the system. In this case we use Web-enabled tablets with an online system. At the end of every session, the home visitor hands the tablet to the client; it's coded in the tablet which module they're working on and which session, and the client completes a very quick, maybe 1-minute, fidelity checklist. That data comes back to our central system, and we can feed it back both to the implementation resource team and to the ongoing coaches, to provide feedback to the providers. So what we're trying to do is create a system that works across these phases to really help support effective implementation.
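Here's a minimal sketch of what that session-level feedback loop might look like in code; the fields, modules, and the 80% coaching threshold are all hypothetical, not the actual system:

```python
# A minimal sketch of a session-level fidelity record and a simple roll-up
# a coach or implementation resource team could review. All fields and the
# 0.8 flag threshold are hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionRecord:
    provider: str
    module: str      # e.g., "home safety"
    session: int
    checklist: list  # client-completed: 1 = step done, 0 = missed

def fidelity(rec: SessionRecord) -> float:
    return sum(rec.checklist) / len(rec.checklist)

records = [
    SessionRecord("hv01", "home safety", 3, [1, 1, 1, 0, 1]),
    SessionRecord("hv01", "home safety", 4, [1, 1, 1, 1, 1]),
    SessionRecord("hv02", "parent-child interaction", 2, [1, 0, 0, 1, 1]),
]

# Per-provider average fidelity, the kind of summary fed back to coaches.
for pid in sorted({r.provider for r in records}):
    scores = [fidelity(r) for r in records if r.provider == pid]
    flag = "  <- coaching follow-up" if mean(scores) < 0.8 else ""
    print(pid, round(mean(scores), 2), flag)
```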
So I think that's a lot to think about. I could also talk about other studies that I had ready, in under-resourced countries, and task-shifting studies, and a really interesting study in Nigeria using churches as a service delivery setting; Echezona Ezeanolue from the University of Nevada is doing that study around reduction of maternal-to-child HIV transmission. So there are lots of interesting studies that I think can inform how we think about applying your particular efficacy and effectiveness questions to an implementation framework. So I will just stop there. Thanks. [ Applause ]

Gregory Aarons: Practical Application of Frameworks and Strategies for EBP Implementation