Welcome, and thank you for standing by. At this time, all participants are in a listen-only mode. During the question-and-answer session, you may press star 1. Today's conference is being recorded; if you have any objections, you may disconnect at this time. I'll turn the meeting over to your host for today's conference, Margaret Farrell. You may begin.

Oh hi, and thank you. Good afternoon, everyone. I'm Margaret Farrell, and on behalf of the National Cancer Institute I'd like to welcome everyone to the October NCI webinar on advanced topics for implementation science research. We're most pleased this month to feature Dr. Russ Glasgow. Russ joined the University of Colorado School of Medicine in September as associate director of the school's Colorado health outcomes research program and as a visiting professor in the Department of Family Medicine. He was most recently here at NCI as deputy director of implementation science in the Division of Cancer Control and Population Sciences. Dr. Glasgow is recognized nationally and internationally as a pioneer in the field of dissemination and implementation research and practice, providing practical research frameworks and intervention models for the field in areas where such leadership has been absent. Russ received his PhD and MS degrees in clinical psychology from the University of Oregon in Eugene. He has more than 30 years of experience in academia and has been the recipient of key awards and honors in his field, including the Society of Behavioral Medicine's Distinguished Scientist Award and the American Diabetes Association's Behavioral Medicine and Psychology Council lectureship for distinguished contributions. This afternoon Russ will discuss PRECIS, the pragmatic-explanatory continuum indicator summary: the ten domains that affect the degree to which a trial is pragmatic or explanatory. This session will also examine opportunities for how NCI can advance this area of research through implementation science. We're most fortunate to
be joined by Dr. Gila Neta of the implementation science team here at NCI, who will serve as the respondent this afternoon. Before we turn the session over to Russ and Gila, I wanted to share a couple of housekeeping items for the webinar. First, this presentation will last approximately 30 minutes, and then we'll have plenty of time for a vibrant Q&A discussion during the second half of the hour. So we welcome your participation, and in order to make that easy for everyone, there are two ways you can submit your questions. You can either press star 1 to be placed in the queue to ask a question live, or you can use the Q&A tab at the top of your screen to type and submit your questions; just type it into that box and hit "ask." So thank you very much for joining us today. We look forward to this important and informative topic, and with that, Russ, I'll turn it over to you to start us off.

Thank you. Thanks so much, Margaret, and good afternoon, everybody; I guess maybe good morning to those of you on the West Coast. It's great to be with you today, and it's kind of fun and interesting to be on the other side; I'm used to sitting where Gila usually does, kind of trying to facilitate this. But it should be fun, and among our other goals for today: if we get any of the actual developers on the call (the CONSORT folks who developed the PRECIS criteria), maybe they can help tell us what the correct pronunciation is, because I pronounce it "PRAY-sis," you heard Margaret pronounce it differently, and I've probably heard about eight other pronunciations. But what the acronym stands for is probably more important, and that is Pragmatic-Explanatory Continuum (that word will be important later) Indicator Summary. So that's what PRECIS is. But I'm real excited, because I am very jazzed about the use of this tool; I think it can help a whole lot, as we'll get into, in advancing our work in pragmatic research. So with no
further ado, let's just go ahead and get started, with some Colorado colors there to welcome you. Here's what I'm going to try to do: the first three sections in about ten minutes each, to give you background on the need for this and to give you some concrete examples of applications of the PRECIS criteria. It doesn't quite say it here, but I've found PRECIS to be useful for planning trials, sometimes even as kind of a mid-course check or adjustment, for reporting both individual trials, and also for doing literature reviews. So I'll be interested in what you think when we get done. But I also want to be sure that we have plenty of time; I know there are at least a few people on the call who have used PRECIS before, so I welcome your experiences, and for those of you who haven't used it before but might be interested, I want to give you plenty of time to see if I've confused you about it, or what other questions you have as well.

So for beginners, going way back to kind of a dictionary-type definition: the notion of a pragmatic trial is contrasted with an explanatory trial, a term that seems to be used more in Canada or by the PRECIS developers. In the US, I think you might want to replace "explanatory" with "efficacy," or a more basic, highly controlled trial. But the notion is that pragmatic is something that works in real-world situations.
I do want to emphasize two things, at the middle and bottom of the slide there. One: pragmatic does not at all mean being less rigorous. It generally does mean a stronger focus on external validity, but it does not mean being less rigorous. And secondly, this is not a better-or-worse distinction: any trial, on any of the various dimensions that we're going to be talking about today, falls somewhere along a continuum (hence the name, continuum indicator summary), and it's not necessarily a good or a bad thing to be more pragmatic or more explanatory. It kind of depends on the question, what your goals are, and things like the status of the literature.

Since this is used to report on pragmatic trials, let's just go over a few things to have a common starting point. This is kind of a synthesis, from my perspective, of key issues of pragmatic trials, and those of you who are familiar with comparative effectiveness research will also note that pretty much all of these things apply to comparative effectiveness research as well. I'm not going to go over everything on here, but just to highlight a couple of points: the first one being the notion that these issues should be ones that are central to, and at least co-equally shared by, stakeholders (your partners in the operational settings that you're working with) rather than it being the researcher who comes up with the idea and then goes and kind of begs and pleads with settings, saying something like, "Can you just give us 50 patients and we'll leave you alone?" This should be more the notion that the study is a priority for the settings. Most of the rest of the points on this slide
I think are straightforward, except I do want to emphasize one of the issues in both pragmatic trials and in CER: the comparison conditions used in pragmatic trials should each be real-world alternatives, something that could actually be applied. So instead of using a no-treatment control or a placebo condition, as we might do in more traditional research or efficacy studies, all of the comparison conditions should be real-world alternatives.

The next slide shows where we get into the heart of the PRECIS criteria. You can see the idea here, and the classic article by Thorpe et al. that laid this out in 2009 is referenced at the bottom: there are ten different domains that cumulatively, or together, reflect the degree to which a trial is more pragmatic or more explanatory, and the idea is that on each of these ten dimensions you can see to what extent a trial, or your trial, falls along that continuum. These are general features of research design, including eligibility criteria; again, we talked about what the comparison condition would be, there on point four; and a lot of it has to do with the intensity of monitoring, and with issues, as you can see from some of the terms there, of compliance and adherence on the part of participants and practitioners or delivery staff.

So let's just take a couple of examples here. We don't have time to go over all of them, but I'll be glad to answer questions, and the references provided to you have a lot of great detail. This just shows a couple of attributes; let's take an easy one in the middle there.
You can read the other one, but take the participant eligibility criteria. In a traditional efficacy or explanatory trial, you'd be concerned about ruling out, if you will, potential confounders. So let's say you're doing a heart disease study: what you might want to do then would be to rule out, say, cancer patients, or those who had arthritis or other conditions, because that could introduce a lot of noise or messiness. So generally what you'd want to do is identify a really homogeneous group; in particular, we often screen out comorbidities, and you want to screen in people who are somewhat motivated, because the goal is to do assessment under optimal conditions. Contrast that with a pragmatic trial, where again you probably don't want to take quite all comers. For example, if there are patients who aren't expected to live more than a few weeks, or who have extreme dementia, you probably wouldn't want them in the trial; but in general you try to be as inclusive as you can.

I think we've got one more slide with other examples. In terms of attributes, let's just take the top one there, on practitioner expertise. You might note that PRECIS covers the level of expertise and experience of the delivery agent, the staff, in both the intervention and the comparison condition. Often in an explanatory or efficacy trial, what you want are the world's experts, because you want this done under optimal conditions: people highly trained, highly experienced, often highly supervised, with a lot of monitoring and a lot of feedback on delivery, so you can get the best results. A pragmatic trial would be much closer to the way this would be done in the real world, so there would be an emphasis on diversity across a wide range of staff in both conditions.

So here's the really cool thing about the PRECIS criteria: it has this summary
figure that I call a spoke-and-hub, or a spoke-and-wheel, if you think about it. Here you can see two examples that are kind of straw-person examples, but the notion is that the ten dimensions we've just talked about are displayed going around the different points, or on the different lines, that are seen here, and then how explanatory versus pragmatic a study is is indicated by where you put a dot along these dimensions. So let's look at the right-hand side, where you see the explanatory study, or again a more efficacy-type study. Let's take the one at the top, practitioner expertise in the experimental condition: if you had a very efficacy-type study and you really just had experts, your dot would be maybe where it is on the right-hand side, very close to the centroid or center point, where you see the "E" for explanatory or efficacy. In contrast, take the same dimension but jump over to the left-hand side, the pragmatic study. This is an extremely pragmatic study on the left, and your dot for this dimension would be all the way out to the extreme. Usually these ratings are done on a five-point scale, either zero to four or one to five; that's somewhat arbitrary, but it's done using a defined set of definitions or criteria. Again, you can see here a prototypic explanatory study and a prototypic pragmatic study. I wish I could see your hands, because sometimes I find people have a hard time understanding this at first; maybe it depends on how visual people are. But the bottom line is, after you kind of understand and play with this a little bit, I find this a really great tool for sharing and reporting results, because you can just glance at this figure and know a great deal about a study, without taking pages and pages to describe the study or design.

So let's take a couple of real-world examples of how this has been used.
My first experience was working with colleagues on a set of studies called the POWER intervention trials for weight loss, funded by the National Heart, Lung, and Blood Institute. Again, I don't think the details are that important here, but there were actually three separate RCTs funded to focus on primary-care-based weight-loss interventions. It was an open competition, I believe under a U mechanism, and they ended up funding three different trials, each of which was generally toward the real-world, pragmatic, or effectiveness end; but they did have differences, because they were investigator-initiated responses rather than a standardized contract. They did share some things, however, and you'll see this on the PRECIS criteria. Oh, sorry, I just messed up the slides; I think I'm back. Thank you, Margaret, you saved me.

Back to the slides; let's take a look at the next slide. This one has the figures across the bottom. It's too small to see the details, but that's somewhat on purpose, because what I'd like you to focus on are the things I've shaded. What's shown here at the bottom are the PRECIS criteria, the hub-and-spoke diagrams, for the three different studies, and on here you can see the similarities and differences. One similarity I'd like to point out is toward the bottom of the figures: you notice that there were similarities on issues such as the primary outcome, which makes sense because that was common across the trials. There are also areas with some differences, as you see toward the top, which typically have to do with the flexibility that was allowed in the intervention.
So we ended up feeling that this was a fairly useful exercise, and we found that a number of people, with a modest amount of training, could reliably code these criteria. As you can see, there was variability both across dimensions, with some of the dimensions being more pragmatic and others less so, and also across the studies; it did turn out that there were significant differences across the studies.

So in terms of a summary of what I think we learned (and I think we have at least one or two people on here who were involved in this too; I'll welcome their comments if they have different take-home lessons than I did), these were some of my conclusions. First of all, this was useful for reporting on a study after it was designed. My understanding is that the initial intent was that PRECIS would be used to design a trial, but we found it was also useful for reporting out in a relatively transparent way. I think the other issues probably aren't central for today, but if some of you are interested in the POWER trial or the actual results, those are listed at the bottom, and the paper that we published along with colleagues is the first one listed at the top.

An issue came up from this experience: we liked PRECIS, but when we actually got our hands wrapped around doing it, our thought was that this was a great start, but that not all of the dimensions were covered by the existing ten PRECIS dimensions, particularly dimensions that stakeholders might be interested in, like what does it cost, and what resources are required. Also, we were interested in external validity issues. You might remember that there was an item about inclusion criteria for participants, but there wasn't anything in the original PRECIS criteria on their representativeness (it was inclusion only), and at the setting level there weren't criteria at all. So what did we do?
In addition to coding the PRECIS criteria as originally proposed and published, we came up with some additional criteria focused largely on external validity, following some work that Larry Green, in public health, and I had done over the years. We also demonstrated in this study that these dimensions, as well as the PRECIS criteria, could be reliably coded after a short period of training.

To not be too mysterious here, here's some Colorado aspen for those of you who are getting bored with the slides. But we added, I think, eight or nine other criteria in addition to the PRECIS ones that we felt were useful, which again focused largely on contextual factors; you can see these: contextual factors, as well as whether the study reported at all on sustainability or cost.

A second application: I'm going to switch now, building on that first experience. A number of us (this is when I was at NCI, in a project led by Mike Sanchez, who I believe is on the call with us today) tried to see how well PRECIS would work to summarize the literature as a whole, since it seemed to work well for planning and reporting individual studies. We were doing a project looking at eHealth studies relevant to cancer, and had a priori identified 117 studies that met our criteria for being eHealth intervention studies. Here are our findings. The first one was somewhat to our surprise, particularly given what I just shared with you: there was relatively little variability in these eHealth studies on the PRECIS scores, and in general, almost all the studies
fell about midway along the PRECIS continuum. I believe we were using a five-point scale from one to five here, so you can see right around three. We did also use some of the additional criteria; not exactly the same as what I showed you, but very similar. I think we used almost all of those, though we might have changed one of them. The other finding was that on the practical feasibility criteria, as we termed them this time, or external validity scores, the studies were less pragmatic. You can see a full point difference, which is a meaningful difference on the scale, on these more external-validity criteria. We did find some kind of interesting subgroup differences, and differences across the criteria in how pragmatic the studies were; I don't have a chance to go into it, and I guess the paper is still coming out (I get confused about whether it's in press or not), but it should be out shortly, and I refer you to it in Translational Behavioral Medicine.

Here's a visual way of showing this. In this figure you can see the PRECIS criteria, and you can see, averaging across these 117 studies, how they were really pretty close to three, or midway between pragmatic and explanatory. In contrast, this slide shows the practical feasibility, or more contextual external validity, scores. And if I can go back (oops, I went the wrong way): here are the PRECIS ones, at about three, and here's the visual display for the external validity dimensions using the same coding criteria; you can see how much less pragmatic the studies were on those. That's just a nice illustration, flipping back and forth (if it doesn't make you dizzy), of how, in a nutshell, you can tell a lot about a study using this summary figure after you have a little experience with it.
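For readers who want to play with this kind of summary themselves, here is a minimal sketch of how one trial's ten PRECIS ratings might be represented and averaged in code. The domain names are abbreviated from the Thorpe et al. list, the 1-to-5 scale follows the convention described above, and the example scores are entirely invented for illustration; a real rating depends on the written anchor definitions for each domain.

```python
# Sketch: representing one trial's PRECIS ratings (1 = most explanatory,
# 5 = most pragmatic) and summarizing where it falls on the continuum.
# Domain names are abbreviated; the scores below are invented examples.

PRECIS_DOMAINS = [
    "participant eligibility",
    "experimental intervention flexibility",
    "practitioner expertise (experimental)",
    "comparison intervention flexibility",
    "practitioner expertise (comparison)",
    "follow-up intensity",
    "primary outcome",
    "participant compliance",
    "practitioner adherence",
    "primary analysis",
]

def summarize(ratings):
    """Return (mean score, label) for a dict of domain -> 1..5 rating."""
    missing = [d for d in PRECIS_DOMAINS if d not in ratings]
    if missing:
        raise ValueError(f"unrated domains: {missing}")
    for d, r in ratings.items():
        if not 1 <= r <= 5:
            raise ValueError(f"{d}: rating {r} outside 1-5 scale")
    mean = sum(ratings.values()) / len(ratings)
    if mean > 3:
        label = "more pragmatic"
    elif mean < 3:
        label = "more explanatory"
    else:
        label = "mid-continuum"
    return mean, label

# A hypothetical, fairly pragmatic trial profile:
trial = {d: s for d, s in zip(PRECIS_DOMAINS, [5, 4, 5, 4, 4, 4, 5, 4, 4, 3])}
mean, label = summarize(trial)
print(f"mean PRECIS score {mean:.1f} -> {label}")
```

Note that a single averaged number necessarily collapses the profile; the spoke-and-wheel figure Russ describes is valuable precisely because it shows each dimension separately.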
Um, the third project: we used this explicitly for planning a study known as the My Own Health Report project, and we wrote up a paper, now under review, led by Bridget Gaglio, with a number of us who have had different experiences using the PRECIS criteria, where we summarized our results overall. For today's purposes, what I'd like to highlight is that we felt the take-home lessons were that the PRECIS criteria were useful for diverse purposes, that they could be reliably rated, and that in almost all studies they revealed differences both across dimensions and across studies. I'd like to argue that these criteria are helpful for planning studies, evaluating progress, and (one of my current soapbox issues) especially for enhancing the transparency of reporting what was done.

So here's just a visual display of the three different snapshots I've given you. You might notice the PRECIS criteria across the bottom; the scores range from 1 all the way up to a possible 5 (or 0 to 4), and you can see there were differences across the studies, but also across dimensions.

Now I'm going to talk about some work involving Paul Estabrooks and colleagues at Virginia Tech on a different review, of group-based pragmatic physical activity interventions, and then we'll kind of summarize and take your questions. They reviewed, drilling down much more narrowly than the eHealth review, which was pretty broad.
These were just physical activity group interventions that focused on group dynamics. They used the PRECIS criteria, also found them useful, and did identify considerable variability across these studies. As you can see, they also used what they renamed external validity dimensions. They found, kind of replicating our earlier results, that the external validity issues were less likely to be pragmatic, or were reported less often, than the traditional PRECIS criteria were, and they recommended in particular enhanced attention to issues of representativeness across the levels of participants, intervention staff, and settings (apologies for the typo), and also to costs.

So, to wrap this up and then take your comments and questions: I at least feel that PRECIS is a very nice summary tool that really efficiently helps summarize how explanatory (or efficacy-oriented) versus pragmatic a project is across a lot of different dimensions. We also ended up feeling that, again, PRECIS was very useful,
but at least for our purposes, from an implementation science perspective (since this is an implementation science webinar), it didn't quite capture everything, so we felt it was useful to add in some other criteria focused on contextual, external validity factors. Again, I think my main take-home here is that it's a very nice, efficient summary that can really help transparency and help communication among stakeholders and researchers.

Like all research, we have some caveats, and I think one experience we had with the eHealth review shows that not all criteria apply to all content areas. The particular example I'm thinking of here is the PRECIS criterion about the practitioners who deliver interventions. In the eHealth studies, we found that some of the interventions we were rating were entirely automated, so it didn't make much sense to try to code practitioner experience on that dimension when in fact there weren't any human intervention staff; the PRECIS criteria were designed, I think, for more traditional clinical trials.

Another interesting point that I'll just throw out as kind of a teaser (I'll be glad to elaborate more later if you would like) is this: what was found is that
researchers tend to rate their own projects as moderately, but statistically significantly, more pragmatic than they do others'. So if you're just reading protocols from studies, researchers tend to rate the ones they've been involved with, their own, as more pragmatic than they rate descriptions of other protocols. Again, I'll be glad to describe either the methodology for that or some reasons; you can probably guess some of them. But I want to conclude with the fact that, again, this is not a value judgment, not good or bad; these are descriptive data, and almost all studies vary along the criteria.

My own summary is that we looked forward to thinking about how these can be used, and about some of their potential. And I think these conclusions were congruent with what the Estabrooks Virginia Tech group found too, and with a lot of other reviews that many of you on the phone or the webinar have been involved in: there are some key areas in need of attention. What seems to be
particularly infrequently reported are representativeness at different levels, cost, sustainability, and also the last two issues: adaptations or changes that are made during the course of a study, and unintended effects or outcomes.

My last slide before I open it up to you: I do want to call out a couple of things about future directions and tell you that the original folks developing PRECIS are actually starting a process to do a revision, an updated PRECIS they call PRECIS-2. I'm going to guess it'll be at least a year before it's out, but that's something you might want to look for. I at least would like to propose (and would be glad to entertain alternative views) that the PRECIS criteria, as we've discussed from some experience, can be really useful all along the sequence, from the very beginning of planning studies, to midpoint evaluations, to summative-type purposes as well. A maybe more provocative point, now that I'm no longer on the federal side, I guess I can say: it might be interesting if funders thought about using this as a criterion, as a way to describe your study when you're reporting it. I also think an intriguing use might be to apply it longitudinally in a given study over time, to see how a study changes from its planning to its implementation to its final reporting. And with that, I think I will turn it back; we do have plenty of time for discussion. So I'll turn it back to Gila and Margaret.

Thanks, Russ. That was an excellent start to open this discussion, and we really do hope that others will take you up on your offer to continue to unpack the issues around these different kinds of studies. So again, just as a reminder, we do welcome your questions, and you can submit your question for us by dialing star 1 on your phone and giving it live, or you can type it in under the Q&A tab at the top of your screen.
And with that, I'll turn this over to Gila to start the discussion.

Thanks again, Russ. That was a really informative and interesting talk on an important topic: having these criteria to evaluate how pragmatic or practical these trials really are. I was wondering, while people are thinking about their questions, if you could talk a little bit about, in terms of moving forward and having these criteria be used, or perhaps required, by funders: is that really realistic, with ten criteria and then these potential additional ones? Do you think this would be too many? What would be a reasonable number of criteria to consider for a requirement?

Good question, and a good challenge. Again, always kind of trying to push, maybe too far too fast, I'd propose not only the ten PRECIS criteria but also some of the additional ones that we found useful in these studies, and other people might want to comment on that too. I noticed particularly that Bridget Gaglio and Mike Sanchez, who have used this, are on; they may want to raise their hands. But I think the key is that these can be scored very efficiently and reliably, and it all comes down to this summary figure; to me, that is where the magic is, and what makes it feasible.
If you had to write out a full-text explanation of all these things, in addition to everything else you have to report on for CONSORT, and with the journal page limitations we have, it probably wouldn't be feasible. But I think that for almost all studies you could report this in a simple little figure, even in journals where you might not have the space, increasingly with the availability of online appendices; I really think it would be quite easy to show the summary results that way. And to me this is so important for implementation science, because these address a lot of the things that are usually left out of reports, and they contain a lot of the information that practitioners, policymakers, and other stakeholders have questions about: they want to know the answers to questions like, well, how similar were those settings to my settings, and some of these other issues, like cost, that just often aren't reported. There's a whole lot of reasons for that, but I think the magic is in the figure. I also say that because, like many issues, there's not just one or two things; I'd have a real hard time saying it's just this one or two dimensions you should report on. It's really this whole profile across the different dimensions, I think, that is where the real beauty and use of this is.

Great, thank you for that. There's a question from Anna Bellman. She asks: can you talk a little more about PRECIS as an evaluation tool? Do you see, for example, that PRECIS could be an interesting tool for PCORI applications, considering their emphasis on stakeholders?

Great question, thanks, Anna.
Yes, I do think it could be really useful. Again, as we did in each of these other applications, my recommendation would be to use the standard PRECIS criteria (or the new ones, when they're out, PRECIS-2), but then also to think about your particular use. Particularly for PCORI, with the focus on stakeholder engagement, it could easily involve an additional item or two, maybe even a separate dimension, about the degree of patient (or, if you will, end-user or consumer) engagement, and then another one about the delivery agents or stakeholders that are involved. So I personally think it could be quite useful that way. I think, in the same way that (and I promise to quit harping on this figure) we say in grant applications that a figure is worth, if not a thousand words, at least quite a few words, just to get across what it is you're doing, it could be quite useful both to reviewers and to those of us writing grants, to think through issues and ask: well, on this given dimension, how pragmatic versus explanatory do I really want the study to be?

Great, thanks for that, and I just brought up the graphic again to remind people how visually accessible this really is. So, a question from Jonathan Tobin, who says "great presentation," and asks: have you examined whether there's any association between the strength of an effect, or effect size, and its variability, and the summary of PRECIS criteria, as applied to an overview of the literature such as you have presented here?

Hey, Jonathan, a great question. Only anecdotally, I can tell you, on the POWER studies.
But again, we haven't looked at that consistently, to my knowledge, although it seems like we should have. So I'm going to give you a quick answer on the earliest one, the POWER studies that I mentioned, and then hope that maybe Mike Sanchez or Bridget Gaglio, who I know are on, can help me out here, because it seems like we should have looked at that in the eHealth review, and shame on us if we didn't; maybe we can do that and report back to you, but at least I'm blanking on it. What we did find in the POWER study (and this isn't too much of a surprise) is that the one intervention that was rated as more pragmatic, which was more real-world, which was out there taking on more diverse, more challenging, real inner-city settings and that sort of thing, in general had a smaller effect size on weight loss than the other studies. Now, a caveat I would give to that is, of course, like many things in life, this is complex, and the question might be: effect size on what outcome? That was on the outcome of weight loss; on the other outcomes we didn't have standard reporting. Those of you who know me know that I have a big thing about reach, that is, what's the broadest range of populations reached, and, in terms of another dimension, adoption across settings. I
I think there would probably be a pretty strong correlation in the other direction there: the studies that are more pragmatic on the PRECIS dimensions would probably have higher reach, or be able to be used in more settings. So again, I think it's a question of effect size on what outcome, but it's a great question. Let me open it up more generally, to Mike or Bridget in particular but to anybody else who has used this, because I'm certainly not the sole expert. I was hoping we'd get some of our Canadian colleagues on here who actually developed it or have used it; I know that, in addition to Thorpe and Zwarenstein, Sharon Straus has used it a lot, but I don't see her on now. So let's see if Mike or Bridget have anything they want to add, or if Jonathan wants to follow up.

And particularly any of our colleagues whom Russ is calling out by name, please feel free to join the session by pressing star 1 and we'll pull you in. In the meantime, while we're waiting for people to get processed, there's another question, from Laura Damschroder, who asks whether you have guidelines for operationalizing the ratings, for example how to score along the various PRECIS dimensions, especially in light of researchers tending to overrate their own studies, or bias them towards the pragmatic end of each scale.

Hi Laura, great question. Yes, we do. For the original PRECIS criteria, they're in that article; Gila, if you have control now, you or Margaret could pull it up. The original Thorpe article does have explicit definitions, and they are probably somewhere else in addition that I'm not thinking of at the moment, on some website.
Maybe they're even on the NCI website; somebody can help me with that. But they do exist, and it is important that there is a specific, concrete definition with anchors for each dimension. I'm pretty sure it would not have been possible to get high reliability if we didn't have those concrete definitions, together with a modest amount of training. I believe on average we had two or three group sessions per training: the first was didactic, explaining the criteria; then in between we each coded a sample article or two to see how much agreement or disagreement we had; then we got back together and, without changing the criteria, added some additional specification. So I think that's a really critical issue.

Just a note on the self-rating bias, and others might want to comment, because who knows, there could be a whole lot of reasons. In this one study we had people rate both their own and others' studies, and we also had independent raters who weren't involved in either; that is how we came to that conclusion, and it showed a modest-to-moderate but statistically significant effect. Particularly since I was one of the raters on our own studies, the reason I believe it happens is that you simply have so much more information about your own study, especially in that case, where we did the ratings after the fact. You know all the things you don't see in a protocol, like how we had to tweak and adapt the recruitment criteria, or how a certain session didn't work quite as well. You just have more information, so in a way it's not a fair comparison; it's a bit like an instruction to the jury to disregard something.
You have all this extraneous information. But I do think the methodological point is important: if you're using it as an investigator and comparing one of your own studies to somebody else's, make sure you have balance across who does the rating.

Great. And I wanted to ask, in terms of these criteria, what have been the trends in their use? Do you see them being increasingly used, and, perhaps more importantly, not just by researchers: are there any examples you can think of where decision-makers have actually taken into account how studies were evaluated using these criteria?

Two different questions, and I don't think I have a great answer or example for either, but it is intriguing to think about. I hadn't thought about stakeholders being asked to use the criteria, but I can see that this could be useful in, say, community engagement exercises, or things being done at CTSA programs around the country. I think the criteria might be particularly applicable when comparing two or more choices for programs or policies: if you could show this dimension, stakeholders could understand it. I don't know that I would see a lot of stakeholders or policy makers doing the actual ratings themselves, but I think they could understand the summary result, and it might be useful in that way. In terms of the criteria being used the other way, I don't have a great example. The closest thing I can come up with, Gila, is that on some projects I know the Robert Wood Johnson Foundation uses evaluability criteria, which are separate from but somewhat related to this, in that, for today's purposes, they look at issues of generalizability, or how feasible something is to do in the real world, and to my knowledge they have used those criteria for funding, actually as the funding criteria.
And again, I'd love to hear if anybody else has examples; I haven't seen these exact criteria used at the funding stage. Your very first question I don't know the answer to, but that's one of the reasons I was so excited to be asked to do this today. If you haven't judged from my tone of voice, this is one of my current soapboxes, and I'm hoping that, through vehicles like this and through some of the publications that we and others are doing, we will stimulate others to use it more broadly.

Thanks. I actually have a few more questions, but we should first open it up; I believe the operator may have some other people on hold for questions.

Yes, our next question is from Michael Sanchez. Your line is open.

Hi, I just wanted to respond to the comment earlier regarding effect size given where a study may fall on the pragmatic-explanatory criteria. It is a shame that we didn't take a look at effect size against the PRECIS scores, but maybe that's something we can do in the future. I do remember seeing a presentation where they looked at rheumatoid arthritis regimens, in terms of the intervention that was applied and how strictly those interventions were applied. In other words, they were divided into two separate groups by how much flexibility the practitioner had, and it seems that the ones that allowed a little more of what we would consider practitioner expertise, with more flexible compliance with the intervention, did score better than the ones that were applied more strictly. There were several different clinical guidelines based on that. I can't remember whether they used PRECIS to indicate whether one intervention regimen was more pragmatic or explanatory, but I'll follow up on that.

Yeah, thanks Mike.
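As an aside on the rating process discussed earlier, the workflow of anchored definitions plus independent raters lends itself to very simple agreement checks. The sketch below is purely illustrative: the domain names, ratings, and tolerance rule are invented for the example and are not taken from the PRECIS publications.

```python
# Hypothetical sketch: two independent raters score one study on a few
# anchored PRECIS-style domains (1 = very explanatory .. 5 = very
# pragmatic), then we summarize how often they agree.

DOMAINS = ["eligibility", "flexibility_of_delivery", "follow_up_intensity", "primary_outcome"]

rater_1 = {"eligibility": 4, "flexibility_of_delivery": 5, "follow_up_intensity": 3, "primary_outcome": 4}
rater_2 = {"eligibility": 4, "flexibility_of_delivery": 4, "follow_up_intensity": 3, "primary_outcome": 4}

def agreement(a, b, tolerance=0):
    """Share of domains on which the raters agree within `tolerance` points."""
    hits = sum(1 for d in DOMAINS if abs(a[d] - b[d]) <= tolerance)
    return hits / len(DOMAINS)

print(agreement(rater_1, rater_2))               # exact agreement: 0.75
print(agreement(rater_1, rater_2, tolerance=1))  # within one point: 1.0
```

In practice a chance-corrected statistic (e.g., a weighted kappa or an intraclass correlation) would be preferable to raw percent agreement; the point here is only how small the bookkeeping is once concrete anchors exist.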
And again, thanks Jonathan for suggesting that. And Mike, since you're on here, if you haven't hung up already: is that the review article that somebody pulled up for us here? Is it out yet, do you know, or is it still in press?

It is out; I think it was published back in June.

Oh, my bad; sorry, a second typo I had on here.

We have another question from Ms. Roberts. Your line is open.

Hi, how are you?

Hey, I'm doing well. Looking forward to seeing you tomorrow, I guess.

On your last slide you mentioned that one of the things to consider about PRECIS in the future would be unintended consequences. Could you elaborate on that, and especially: consequences for whom, or for what?

Good point; you've nailed me down. It's not something I've thought through a lot, but as you well know, I think we often tend to be too narrow in our focus in studies, and we've seen plenty of examples, both in traditional medical care and in policy, of deleterious unintended consequences. Parenthetically, sometimes we might also see unintended positive generalization consequences, so it isn't always negative. I've not thought it through to the level of exact criteria, but I could see, when you're reading a protocol or a reported study, it simply being: did you look for unintended consequences at all? Is there any report on them, any evidence that at least you made the effort? I almost think that, by definition, if something is unintended, you didn't think about it ahead of time.
So it's probably going to tend to be a more qualitative measure, I would guess, rather than a specific quantitative measure you could look for a priori. But I do think, in terms of thinking broadly about systems issues and contextual factors, it's something we should pay additional attention to.

Absolutely, thanks. We have a comment from Laura Linnan, I hope I'm saying that correctly, from UNC Chapel Hill. She comments that she has done community-based participatory research studies in which community partners were asked to rate the extent to which a study is participatory, so she could easily see partners being asked to rate a study on key factors like the PRECIS constructs. After doing private ratings, a discussion of the results would be entirely consistent with the participatory nature and spirit of the work.

Thanks, Laura, great point; I could see that too. In fact, that might be an interesting exercise for some CTSAs to take on: rating different projects. It would also make for an interesting comparison, just as we talked about researchers rating their own projects as more pragmatic than independent raters might: have the community partners rate how participatory a project was, then look at the researchers' ratings and examine the similarities and differences. A good thought.

I just wanted to ask: if the operator has anybody else on the line, please feel free to interrupt. I do have one last question, and that's about some of these added constructs like cost, sustainability, and adaptability: how good are the measures we have for these things, and how well can they be
incorporated into these kinds of criteria?

Good challenge; two different answers. The first is that these issues are so important, so underreported, and so critical that even a minimal measure, such as "was anything on cost or resources reported?" or "was anything reported on sustainability of the program after the study ended?", would be a huge contribution, and could be very easily rated, even as a dichotomous item. So I think it's critically important that we do something there, and that part would be quite easy as we move forward.

The greater challenge is to develop more nuanced measures of these issues. I think there are some models out there, although admittedly, because this is new and underreported, it's going to take us a while, and we'll have some rough edges. We'll learn from applying a measure when it doesn't work perfectly; it might be like the example I gave, where using PRECIS to score eHealth proposals, when there's no practitioner involved, didn't work so well. I'm sure there would be lessons learned and we'd make some mistakes, but that's always the case when you move into a new area, and I don't think it's a reason not to do it.

One quick thing, since I know we're getting towards the top of the hour. On cost, what I've learned is that it's important to focus both on cost as the study was done, the implementation cost if you will, and also to report on replication cost, where you consider the different scenarios you might face if your situation is different. And finally, on sustainability, I wish we had more time, but some of you may remember David Chambers' webinar with us last month on sustainability: the issue there is sustainability of what.
Is it sustainability of the effect size? Sustainability of the community partnership? Sustainability of the exact intervention you delivered? Maybe you could rate all of those, without getting carried away, unless that was your key focus.

And I think, Russ, that's a great segue to invite people to continue this discussion online at Research to Reality, either on this webinar or on Dr. Chambers' webinar; you can access the archived webinars there. Your feedback is important to us, and an online evaluation survey will be sent to you by email in a few moments. We want to once again thank everyone for joining us for the webinar, and we hope you'll be able to join us next month, on Tuesday, November 26, from 2:00 to 4:00 p.m. Eastern Time, when we'll welcome Russ Glasgow, along with Ross Brownson and Chris Carpenter, to discuss key needs in measurement and issues of harmonization.

PRECIS: How to Enhance Transparency and Reporting for Dissemination and Implementation