MAGIC is governed by an SC composed of the five principal applicants and a co-applicant (authors of this paper). The SC is responsible for planning, decision-making, and resource allocation in consultation with the larger MAGIC team. The SC also identifies circumstances requiring remedial action to manage risk, such as delays in deliverables or conflicts of interest. The SC ensures that all MAGIC members and knowledge users involved in queries conform to the policy for disclosing conflicts of interest. This issue is critically important, given the focus on drug safety and effectiveness and the need for researchers to be at arm’s length from industry.
Research approach
Knowledge users (including Health Canada, provincial partners, and CADTH) can submit queries regarding drug safety and effectiveness to the DSEN Coordinating Office using a standardized template. A DSEN Scientific Advisory Committee that includes representation from the CIHR, Health Canada, CADTH, CIHI, and the collaborating centres discusses query feasibility. Each collaborating centre conducts a feasibility assessment from its team’s methodologic perspective; for example, some queries may best be answered through a systematic review and NMA, others by a prospective observational study, and still others by a combination of methods. The DSEN Coordinating Office then asks the collaborating centres to present their feasibility assessments at a monthly Scientific Advisory Committee meeting with the query requestor (i.e., knowledge user) in attendance. After this meeting, the requestor considers which of the proposed approaches would best meet their needs. The DSEN Coordinating Office then informs the relevant collaborating centres. Coordination of this process via the DSEN Coordinating Office typically takes 1–3 months.
MAGIC uses a five-phase process to answer queries (Figure 1), which has been iteratively developed over the past 10 years. The cross-cutting principles of our approach focus on meeting knowledge users’ needs for rigorous knowledge syntheses in a timely fashion, building capacity and mentorship, demonstrating research excellence, and using an integrated KT approach. In Phase 1, the DSEN Coordinating Office informs MAGIC of the query requestor’s decision; MAGIC immediately asks to meet with the requestor, and a teleconference is organized. In Phase 2, MAGIC meets with the requestor to clarify their needs and refine the query (outlining the patient population, interventions, comparators, outcomes, concept, and context). During this meeting, query restrictions (such as age and language) are outlined, suitable methods for knowledge user engagement are identified, and the desired format of the final deliverables (e.g., report, slide deck) is agreed upon. In Phase 3, a work plan is created using the template specified by the DSEN Coordinating Office and is sent to the knowledge user and the Coordinating Office for review. The work plan includes the methods, budget, KT strategy, project update frequency, and timelines. MAGIC offers to present and discuss the work plan with the knowledge user. MAGIC conducts the research during Phase 4 and provides regular updates via the DSEN Coordinating Office. Preliminary reports are provided to the knowledge users, and knowledge exchange is offered via a webinar to discuss results. Tailored end-of-grant KT strategies are implemented in Phase 5. For example, MAGIC incorporates all feedback from the knowledge users, creates a final report, offers a webinar to present the results, and submits a publication to an open access, peer-reviewed journal. Knowledge users are invited to be co-authors on all publications. Publications are shared with the DSEN Coordinating Office. A one-page research brief is posted on the MAGIC website and disseminated via social media (e.g., Twitter, LinkedIn).
Queries may be declined for a variety of reasons. For example, a knowledge synthesis may not be feasible because there are no primary studies on the topic (e.g., a new drug that has not yet been tested in a randomized trial or for which no post-marketing surveillance data are yet available). If a recent, high-quality knowledge synthesis has already been completed, there is no need for another synthesis, thereby avoiding duplication of effort and research waste. Another common reason for declining a query is that the question is best answered by another study design, such as a prospective observational study.
Once a query is approved, MAGIC uses a standardized nine-step method for knowledge synthesis, as outlined in Figure 2. We use standardized methods to reduce redundancy, promote independent reproducibility, increase efficiency, and ensure best practices in knowledge synthesis conduct, such as the methods in the Cochrane Handbook for Systematic Reviews (Higgins et al. 2021), the JBI guidance for scoping reviews (Peters et al. 2020), and the JBI methods guide for synthesizing qualitative evidence (Aromataris 2020). All steps are done in an iterative fashion with input from the knowledge users throughout. In Step 1, we match the query to the relevant knowledge synthesis method using a freely available tool we created (KTP 2019). Question formulation and refinement occur in Step 2, and sex and gender are considered in all questions as appropriate. The protocol and work plan are developed in Step 3. The protocol follows relevant reporting guidelines (e.g., PRISMA-P (Shamseer et al. 2015)) and, with permission from the knowledge user, a one-page protocol brief is posted on the MAGIC website. The protocol is peer-reviewed and registered (e.g., with PROSPERO (2021) for systematic reviews and the Open Science Framework (OSF 2021) for other reviews). The literature search is conducted in Step 4 by experienced librarians using multiple electronic databases and grey literature (CADTH 2011). Online software (e.g., DistillerSR (2021), Synthesi.SR (Synthesi 2014)) is used to manage and standardize screening of titles and abstracts and of full-text articles (Step 5). Data abstraction occurs in Step 6, and we assess the quality of the evidence in Step 7 using GRADEpro (2021) software and relevant risk-of-bias tools (Sterne et al. 2019). In Step 8, we complete the synthesis, which includes a descriptive summary of the included studies. If more than one included study reports an outcome specified in the research query, and where appropriate and feasible, we conduct a meta-analysis. NMA is considered in partnership with knowledge users when there are multiple interventions and the knowledge users want to make inferences about their relative effectiveness. Reporting the knowledge synthesis is Step 9 and involves the use of relevant reporting guidelines from the EQUATOR Network to ensure transparency.
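As a generic illustration of the pooling done in Step 8 (shown only as a sketch of a standard inverse-variance random-effects model; the specific model and between-study variance estimator are selected for each query and are not prescribed here), study-level estimates are combined as

\[
\hat{\theta}_{RE} = \frac{\sum_{i=1}^{k} w_i^{*}\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i^{*}}, \qquad w_i^{*} = \frac{1}{v_i + \hat{\tau}^2},
\]

where \(\hat{\theta}_i\) is the effect estimate from study \(i\), \(v_i\) is its within-study variance, \(k\) is the number of included studies, and \(\hat{\tau}^2\) is the estimated between-study variance (e.g., using the DerSimonian and Laird estimator). When \(\hat{\tau}^2 = 0\), this reduces to a fixed-effect analysis.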