Background
Publication bias is “the tendency on the part of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings” (
Dickersin 1990). Studies with statistically significant or positive results are more likely to be published than those with statistically non-significant or negative results—this means the dissemination of research findings is a biased process (
Stern and Simes 1997;
Dubben and Beck-Bornholdt 2005;
Song et al. 2010;
Dwan et al. 2013). Similarly, studies with positive results were much more likely to be published in a shorter time than studies with indefinite conclusions (
Stern and Simes 1997). Publication bias threatens the practice of evidence-based medicine, the validity of meta-analyses, and the reproducibility of research (
Marks‐Anglin and Chen 2020). Under-reporting due to publication bias exaggerates the benefits of treatments and underestimates their harms (
McGauran et al. 2010). Ultimately, this is detrimental to the healthcare system, as it wastes resources and puts patients at risk (
Moher 1993;
Chalmers et al. 2013). Between 1999 and 2007, fewer than half of all trials registered and completed on ClinicalTrials.gov were published (
Ross et al. 2009). Between 2010 and 2012, fewer than half of all the registered trials for rare diseases were published within four years of their completion (
Rees et al. 2019).
In 2008, the World Medical Association's (WMA) Declaration of Helsinki stated that “every clinical trial must be registered in a publicly accessible database before recruitment of the first subject” (
WMA 2021). The declaration also states that all studies should be published, regardless of the statistical significance of their outcomes. Nonetheless, publication bias remains prevalent globally. For instance, across all 36 German university medical centers, only 39% of clinical trials completed between 2009 and 2013 were published within 2 years of their completion (
Wieschowski et al. 2019). In 2015, the World Health Organization (WHO) published a Statement on Public Disclosure of Clinical Trial Results. It states that the main findings of clinical trials are to be published no later than 24 months after study completion and that “the key outcomes are to be made publicly available within 12 months of study completion by posting to the results section of the primary clinical trial registry” (
WHO 2022). This WHO statement serves as a global guideline to reduce publication bias and improve overall evidence-based medical decision-making.
In 2020, the Canadian Institutes of Health Research (CIHR) asserted that, as of 2021, it would implement new policies to ensure full adherence to the WHO Joint Statement requirements (
Government of Canada CI of HR 2020). Similarly, the Canadian Government recommends adherence to the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2), which states that all clinical trials shall be registered before recruitment of the first trial participant and that researchers shall promptly update the study registry with the location of the findings (
TCPS 2018). The Canadian Government also encourages sponsors to register clinical trials in a publicly accessible registry such as ClinicalTrials.gov (
Canada 2003). Despite these recommendations, explicit guidance for trial reporting is lacking (
Cobey et al. 2017). Even in jurisdictions where legal frameworks for trial registration and reporting have been established (e.g., by the Food and Drug Administration in the United States), low adherence persists (
DeVito et al. 2018).
Results
A total of 6790 clinical trials conducted in Canada were identified. Of those, we excluded 70 trials whose clinicaltrials.gov records contained one or more incorrect entries and were therefore incompatible with our software (Microsoft Excel). For example, some of those 70 records contained an alphabetic entry in a field with a strictly numerical requirement. Because our software could not correct those mistakes, the affected studies were excluded, leaving a total of 6720 (99%) studies in our analysis.
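To make this screening step concrete, the sketch below shows one way such malformed records could be flagged programmatically. It is a minimal illustration only: the CSV file and the `enrollment_count` column are hypothetical assumptions, and the study itself performed this step in Microsoft Excel.

```python
import pandas as pd

# Load the trial records exported from clinicaltrials.gov.
# (File and column names are illustrative assumptions, not the actual schema.)
trials = pd.read_csv("canadian_trials.csv")

# Flag records whose strictly numerical fields contain non-numeric entries,
# e.g., alphabetic text submitted where a count was required.
numeric_ok = pd.to_numeric(trials["enrollment_count"], errors="coerce").notna()

excluded = trials[~numeric_ok]  # malformed records (70 in our sample)
included = trials[numeric_ok]   # analyzable records (6720, or 99%)

print(f"Excluded {len(excluded)} trials; {len(included)} retained for analysis.")
```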
Demographic data
The demographic data of the collected sample are highlighted in
Table 2. The median year of trial primary completion was 2015. Of the 6720 included clinical trials, 1% (
n = 65) had a primary completion date in 2009, while 12% (
n = 819) had a primary completion date in 2019. A total of 38% (
n = 2581) of identified trials did not indicate the phase of their study. Fifty-nine percent (
n = 3967) of trials were prospectively registered, while the remaining trials were registered after recruitment started. Moreover, 39% (
n = 2642) of all trials made their results available in the registry. The National Institutes of Health (NIH) was identified as the lead sponsor in only 0.8% (
n = 57) of all trials, which were then included in the “Industry” category, as can be seen in
Table 2.
Primary completion date
Outlined in
Table 3 are all the primary outcomes we measured by year of primary completion. From 2009 to 2019, the number of studies reaching primary completion increased. The prevalence of prospective registration also rose over this period, from 35% in 2009 to 73% in 2019. In contrast, no trend was observed in the reporting of results across the years: 34% in 2009 and 32% in 2019. Aside from the first year examined (2009), we did not observe a trend in the publication of study findings from 2010 to 2014. Overall, there was a slight upward trajectory in adherence to all three practices, largely driven by the increases in prospective registration and publication of findings.
Lead sponsor, phase of study, total enrollment, and countries implicated
Table 4 outlines all the primary outcomes based on lead sponsor, total patient enrollment, phase of study, and clinical trial site location. Overall, 59% (
n = 3967) of trials were prospectively registered, 39% (
n = 2642) had their results reported in the registry, and 55% (
n = 3724) had their findings published. One-third (
n = 2138) of trials did not have their results available to the public in any form; that is, the results were neither reported in the registry nor published.
Trials with an “Industry” lead sponsor had higher rates of prospective registration, reporting of results, and publication of study findings than trials with an “Academia” lead sponsor. Overall, clinical trials led by “Industry” had 36% (n = 1182) adherence to all three best practices, while clinical trials led by “Academia” had an adherence of only 5% (n = 179). A univariable analysis determined that the odds of prospective registration were 56% lower with an “Academia” lead sponsor than with “Industry” (OR = 0.44; 95% CI: 0.40–0.49). Moreover, the odds of result reporting were 93% lower (OR = 0.07; 95% CI: 0.06–0.08) and the odds of publication of findings were 13% lower (OR = 0.87; 95% CI: 0.79–0.96). Overall, the odds of adherence to all three practices concurrently were 90% lower in “Academia” than in “Industry” (OR = 0.10; 95% CI: 0.09–0.12).
Adherence to study registration and reporting best practices was higher in larger clinical trials. Of the clinical trials with over 500 participants, 48% (n = 502) adhered to all three practices, whereas clinical trials with fewer than 100 participants had an overall adherence rate of only 8% (n = 274). A univariable analysis determined that the odds of prospective registration were 59% lower for trials with <100 participants than for trials with >500 participants (OR = 0.41; 95% CI: 0.36–0.48). Moreover, the odds of result reporting were 91% lower (OR = 0.09; 95% CI: 0.08–0.11) and the odds of publication of findings were 80% lower (OR = 0.20; 95% CI: 0.17–0.23). Overall, the odds of adherence to all three practices concurrently were 91% lower in trials with <100 participants than in trials with >500 participants (OR = 0.09; 95% CI: 0.08–0.11).
Phase 3 trials had the highest rates of prospective registration, result reporting, and publication of findings of any phase. Phase 3 studies had an adherence of 49% (n = 777) to all three practices, as opposed to phase 1 studies with an adherence of 4% (n = 19). A univariable analysis determined that the odds of prospective registration were 59% lower for phase 1 trials than for phase 3 trials (OR = 0.41; 95% CI: 0.33–0.50). Moreover, the odds of result reporting were 95% lower (OR = 0.05; 95% CI: 0.04–0.06) and the odds of publication of findings were 86% lower (OR = 0.14; 95% CI: 0.11–0.17). Overall, the odds of adherence to all three practices concurrently were 96% lower in phase 1 trials than in phase 3 trials (OR = 0.04; 95% CI: 0.03–0.07).
International clinical trials had higher rates of prospective registration (74%; n = 2193), result reporting (75%; n = 2207), and publication of findings (65%; n = 1924) than trials conducted exclusively at Canadian sites. Overall, international trials had 42% (n = 1238) adherence to all three practices, while “Canadian Only” trials had an adherence of 3% (n = 123). A univariable analysis determined that the odds of prospective registration were 69% lower for Canadian-only trials than for international trials (OR = 0.31; 95% CI: 0.28–0.35). Moreover, the odds of result reporting were 96% lower (OR = 0.04; 95% CI: 0.04–0.05) and the odds of publication of findings were 51% lower (OR = 0.49; 95% CI: 0.45–0.54). Overall, the odds of adherence to all three practices concurrently were 95% lower in Canadian-only trials than in international trials (OR = 0.05; 95% CI: 0.04–0.06).
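Each odds ratio above comes from a univariable comparison that reduces to a 2 × 2 contingency table. The sketch below shows the standard computation of an OR and its Wald 95% CI; the counts used are hypothetical placeholders, not our data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:

                 outcome+   outcome-
    group 1         a          b
    group 2         c          d
    """
    or_ = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, (lower, upper)

# Hypothetical counts for illustration only (not the study's data):
# prospective registration (yes/no) by lead sponsor ("Academia" vs "Industry").
or_, (lo, hi) = odds_ratio_ci(a=900, b=1500, c=1800, d=1300)
print(f"OR = {or_:.2f}; 95% CI: {lo:.2f}-{hi:.2f}")
```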
The remaining results for the variables of interest are presented in the
Appendix. All the data analyzed, the statistical analyses, and the results pertaining to the top Canadian institutions can be accessed directly via this
link.
Quality assurance results
A random 10.0% sample (
n = 672) was selected for manual verification of the publication status. As per the downloaded data from clinicaltrials.gov, 56% (
n = 378) of those trials had results published in a medical journal. The quality assurance determined that 70% (
n = 470) of those trials were in fact published. Clinicaltrials.gov underestimated the true prevalence of published clinical trials because (1) some trials were published without an NCT ID; (2) some trials were published after we downloaded the data; (3) some trials were published in the form of an abstract, thesis, poster, or dissertation; and (4) clinicaltrials.gov was not able to automatically index some publications in certain journals.
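For readers interested in reproducing the sampling step, a minimal sketch of how a 10% verification sample can be drawn and summarized follows. The file and column names are assumptions for illustration; the verification of publication status itself was performed manually.

```python
import pandas as pd

# The analyzable records (file and column names are illustrative).
trials = pd.read_csv("included_trials.csv")

# Draw a reproducible 10% random sample for manual verification.
qa_sample = trials.sample(frac=0.10, random_state=1)

# 'registry_publication' holds the status downloaded from clinicaltrials.gov;
# 'verified_publication' is recorded by hand after searching for each trial.
registry_rate = qa_sample["registry_publication"].mean()
verified_rate = qa_sample["verified_publication"].mean()

print(f"Registry-indexed publication rate: {registry_rate:.0%}")
print(f"Manually verified publication rate: {verified_rate:.0%}")
```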
Discussion
Of the almost 7000 RCTs in our sample, less than two-thirds were prospectively registered; less than half made their results available; and less than two-thirds published their findings. Less than a quarter of the RCTs completed all three best practices. Trials conducted with exclusively Canadian sites were substantially less compliant with these practices than trials with both Canadian and international sites. Importantly, the trials we describe were not small (
Chan and Altman 2005); nearly a third of the trials included between 100 and 500 participants, and 15% (
n = 1041) of them included over 500 participants.
Of all the variables included in the logistic regression, four were highly associated with study registration and reporting best practices: (1) clinical trials led by “Industry” (pharmaceutical companies), (2) phase 3 clinical trials, (3) trials with over 500 participants, and (4) clinical trials conducted by a multinational team. The four variables that were negatively associated with study registration and reporting best practices were (1) clinical trials led by “Academia” (universities), (2) phase 1 clinical trials, (3) trials with fewer than 100 participants, and (4) trials conducted with exclusively Canadian sites.
Some readers will view these results as another example of egregious waste in biomedicine with little improvement since the 2014 landmark Lancet series on research waste (
Kleinert and Horton 2014). Patients, who are critical to the success of clinical trials, are likely to be disappointed with these results; their contributions are not being honored. For clinical practice guideline developers, these results indicate that evidence is missing from the totality of knowledge about an intervention. For healthcare funders, these results indicate a poor return on investment. If grantees use scarce resources, often taxpayer dollars, and do not prospectively register their trials, make their results available in registries, or publish their results, everyone loses. Finally, academic institutions risk their reputations when their faculty members fail to meet minimum national and global standards (WHO; CIHR).
These results are not unique to Canada. Similar results have been reported elsewhere (
van den Bogert et al. 2016;
Rüegger et al. 2017;
Wieschowski et al. 2019;
Taylor and Gorman 2022). The overall lack of prospective registration and reporting of clinical trial results may reflect a lack of knowledge among clinical trial teams and/or their academic institutions. Prospective registration and results reporting are part of a larger ecosystem of open science, which includes transparency. Compared with some other parts of the world, Canada has been slow to publicly embrace the practices of open science (
Government of Canada 2022).
The lack of prospective registration may reflect that clinical trial principal investigators are leaving these responsibilities to other team members. Alternatively, funders may not have sufficiently strong adherence monitoring in place. With the advent of automated digital monitoring and the increasing availability of application programming interfaces (APIs), this should be less of a problem. The European Trials Tracker scheme, developed by the University of Oxford's Bennett Institute for Applied Data Science, is an example of existing monitoring on a large scale (
https://www.bennett.ox.ac.uk/) (
Bennett Institute for Applied Data Science 2022). Funders and academic institutions could use digital dashboards to monitor adherence to clinical trial policies and identify training needs (
Cobey et al. 2022).
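As a concrete illustration of such monitoring, the sketch below polls registry records for posted results. It assumes the public ClinicalTrials.gov v2 records API and its `hasResults` field; the endpoint and field names should be verified against the current API documentation, and the NCT IDs shown are placeholders.

```python
import requests

def results_posted(nct_id: str) -> bool:
    """Return True if the registry record reports posted results.

    Assumes the public ClinicalTrials.gov v2 API; verify the endpoint and
    the 'hasResults' field against the current API reference before use.
    """
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    record = requests.get(url, timeout=30).json()
    return bool(record.get("hasResults"))

# Hypothetical dashboard sweep over a funder's portfolio of trial IDs.
portfolio = ["NCT01234567", "NCT07654321"]  # placeholder IDs
for nct in portfolio:
    status = "results posted" if results_posted(nct) else "results missing"
    print(nct, status)
```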
In the last few years, several initiatives have proposed moving away from traditional metrics such as the number of publications (the irony of our results—counting publications—is not lost on us) towards a broader set of best practices that reflect an institutional commitment to research integrity when assessing researchers for hiring, promotion, and tenure (
Hicks et al. 2015;
Moher et al. 2020;
DORA 2022). Current researcher assessment could be augmented by tracking whether faculty members have prospectively registered their trials (and other studies), made their results available on a trial register, and published them (preferably in an open access journal).
It is time for trial funders and academic institutions to collaborate to address the overall lack of adherence of Canadian trials to registration and reporting mandates. In the Canadian context, there is now a requirement for equity, diversity, and inclusion training when applying for grant funding from the CIHR. CIHR has had success in requiring principal investigators to take sex and gender training before they can submit a grant application (
Haverfield and Tannenbaum 2021). Something similar could be introduced for clinical trial registration and results reporting. Such training could be required of all faculty and research staff involved in conducting trials, which would demonstrate funders' and academic institutions' commitment to improving this situation. They could also collectively commit to evaluating such an educational intervention by conducting a stepped wedge and/or cluster trial across universities to ensure the training was having the desired effect.
From the earliest attempts to introduce clinical trial registration in the 1980s, there was a strong belief that it might reduce publication bias and provide a more accurate picture of the estimated benefits and harms of interventions. Our results, and those of others (
Scherer et al. 2018;
Wieschowski et al. 2019), suggest that publication bias remains a substantive problem despite prevalent views that clinical trials are heavily regulated. Indeed, although regulation via policy exists, if adherence is not audited, the policy goals will not be met.
CIHR recently updated its policy guide (see box). The “stick” in this updated guidance is likely to be the loss of future funding for principal investigators unless their current trials are registered and the results made available. Importantly, the agency will monitor adherence annually “by asking impacted researchers to provide clinical trial registration identifiers, and links to summary results and open access publications”.
“Nominated principal investigators receiving CIHR grant funds for clinical trial research after 1 January 2022 must comply with the above requirements to remain eligible for any new CIHR funding”. Despite the recently updated policy guide, the CIHR stated that in 2022, only 57% of CIHR-funded trials “had registered their clinical trial in a publicly available, free to access, searchable clinical trial registry complying with WHO's international agreed standards before the first visit of the first participant” (
Government of Canada CI of HR 2023). Perhaps adherence to the new policy guide will only be realized for future CIHR-funded trials once nominated principal investigators lose funding eligibility for their lack of compliance.
Limitations
We relied on the data reported on clinicaltrials.gov. This is the most widely used registration platform (
Zarin et al. 2011), accounting for the vast majority of all clinical trial registrations. To be counted in our analysis, trials had to be marked as “completed” on clinicaltrials.gov; this means we missed records that were registered but never updated to “completed” despite the trials having ended, so true adherence is likely worse than we report here. Moreover, we only tracked studies that were registered on clinicaltrials.gov in the first place; had we also analyzed publications of unregistered trials, adherence may have been even lower. Furthermore, we did not analyze the time elapsed between the study completion date and the publication date; some studies may have posted their results and published their findings a decade after study completion, despite the recommendation to do so within 2 years (
WHO 2022). Despite all of this, less than one-quarter of the sample adhered to all three best practices. Some may argue that trials with non-statistically significant results take longer to publish; however, a recent review found no difference in time to publication based on the statistical significance of the trial (
Jefferson et al. 2016).
Another limitation is that we relied on clinicaltrials.gov to determine the publication status of study findings. Ideally, we would have completed the quality assurance for our entire sample; however, due to resource constraints, we limited the quality assurance to 10% of our sample. As demonstrated by our quality assurance, clinicaltrials.gov underestimated the true prevalence of publications by roughly 14 percentage points. Clinicaltrials.gov reported that 56% (
n = 378) of trials were published, but our quality assurance determined the true number to be 70% (
n = 470). This is in line with a recent study in which over a third of the trials analyzed on clinicaltrials.gov had no results available on either clinicaltrials.gov or PubMed up to 36 months after their primary completion date (
Nelson et al. 2023). Some publications were missed by clinicaltrials.gov for the following reasons: the publication did not include an NCT ID; the publication date fell after we collected the data; the publication took the form of an abstract, thesis, or poster; or the publication appeared in a journal not indexed by clinicaltrials.gov. Overall, this underestimation of publication status suggests that, despite low levels of prospective registration and results reporting, more results from these trials are being published than the registry indicates. Nevertheless, all three best practices must be adhered to concurrently to maintain a high level of scientific rigor: if findings are published without prospective registration or results reporting, bias is introduced into the published paper. The underestimation of publication status therefore does not change our finding that there is substantial room for improvement. Ultimately, our analysis is limited by how comprehensive and detailed the documentation on clinicaltrials.gov is.
In summary, our analysis of nearly 7000 Canadian trials registered on clinicaltrials.gov indicates that there is substantial room for improvement in ensuring that trials are prospectively registered at inception (prior to the first person being randomized), that results are reported in a publicly accessible registry, and that completed trials are published, preferably in an open access platform/journal. The consequences of not monitoring adherence to these activities are profound and wasteful. The international AllTrials initiative, signed by more than 700 organizations, has brought attention to public support for these policies (
AllTrials 2022). Several Canadian-based foundations have signed this declaration (e.g., Canadian Cancer Research Alliance, Canadian Agency for Drugs and Technologies in Health, Canadian Medical Association, and Canadian HIV Trials Network). Other stakeholders, like Health Canada, and funders of academic trials, like the federal Tri-Agency, ought to also commit to the AllTrials initiative and its broader principles. Canada should join the global movement to address research waste due to noncompliant registration and reporting of trials and seek to lead in identifying solutions.