
What factors are important to the success of resubmitted grant applications in biomedical and health research? A retrospective study of over 20 000 applications to the Canadian Institutes of Health Research

Publication: FACETS
29 April 2025

Abstract

In this retrospective study, we investigated the outcomes (funded/not funded) and factors related to the funding of resubmitted applications to the Canadian Institutes of Health Research (CIHR) Open Operating Grants Competition and Project Grant Competition between 2010 and 2022. The primary outcome was the proportion of resubmissions and new applications that were funded. Using a random forest model, we explored the importance of variables related to the success of resubmissions. A higher proportion of resubmissions (∼23%) were funded compared to new submissions (∼12%). The most important variables related to resubmission success were the rank (%) and score (/5) given to the preceding application and the number of CIHR-funded grants where the Principal Investigator was a named team member. The least important factors were the language (English, French) of the application and whether the application was reviewed by the same reviewers or review committees. Resubmitting applications to the CIHR Project Grant Competition was beneficial, particularly for projects that were previously highly ranked and received high scores. These results may offer guidance for researchers who are deciding whether to resubmit rejected applications.

Introduction

Few biomedical grant applications are successful (Guthrie et al. 2018a). When an application is rejected, applicants are typically encouraged to revise and resubmit their applications (Lauer 2016; Crow 2020), an expensive and time-consuming task (Herbert et al. 2013; Gross and Bergstrom 2019). Given the low chances of success, researchers may question whether it is worth resubmitting the grant application (Crow 2020). Here, we describe the outcomes of resubmitted applications to the Canadian Institutes of Health Research (CIHR) Open Grants competitions.
In the U.S., resubmitted applications to the National Institutes of Health (NIH) programs are more successful than new applications (Lauer 2016; Nakamura et al. 2021), though success rates remain modest (Lauer 2016). Researchers require clear, actionable information to help them decide whether to resubmit a rejected grant application (Derrick et al. 2023), without which they may face multiple rejections from the same competition (von Hippel and von Hippel 2015). For NIH competitions, quantitative results (e.g., scores, ranking) from the peer review of the rejected application are related to whether someone decides to resubmit (Boyington et al. 2016; Hunter et al. 2024) and the success of the resubmission (Lauer 2017). There are few data for other programs and countries, and it is not clear whether data from the NIH generalize to other funding systems (Lasinsky et al. 2024). Other factors such as applicants’ sex/gender, their previous funding success, and reviewer and review panel characteristics may also be related to whether a grant is funded (Guthrie et al. 2018a). Some of these factors may be related to how CIHR project grants are assessed (Tamblyn et al. 2018). However, to date, factors related to the success of resubmissions to the CIHR Project Grant Competition have not been explored.
Biomedical science and its benefits to society depend on successful applications for public research funding (Yin et al. 2022). Substantial resources are wasted on repeated unsuccessful applications (Gross and Bergstrom 2019). To help researchers decide whether to resubmit their rejected grant application, this exploratory study analyzed over a decade of resubmission data from the CIHR Open Operating Grant and Project Grant competitions. The aims were to describe the data to (i) identify success rates of resubmitted CIHR operating and project grant applications and (ii) highlight which factors were the most important for resubmission success.

Methods

This was a retrospective cohort study of observational data collected by CIHR between 2010 and 2022. When appropriate, the methods are reported in accordance with the relevant requirements of the Minimum Information about Clinical Artificial Intelligence Modeling (MI-CLAIM) checklist (Norgeot et al. 2020). A completed checklist is included in the Supplementary Material (Table S1). The Behavioural Research Ethics Board at the University of British Columbia approved the study (H23-00719; 31 March 2023).

Context

The CIHR Open Grants competition—which currently accounts for approximately $750 million of CIHR’s $1.3 billion annual funding budget—is open to independent researchers at any career stage who seek funding to support their proposed fundamental or applied health-related research. The Open Grants competition comprises the Open Operating Grants and the Project Grant competitions, from which data were extracted for 2010–2015 and 2017–2022, respectively. Each Peer Review Committee ranks the applications it considers in a competition, an approach intended to account for the varying ways that applications are scored by the approximately 60 different Peer Review Committees in each competition. An application's rank, not its score, dictates funding decisions.

How CIHR handles resubmitted grant applications

Applicants may submit a previously unfunded application in a subsequent Open Grant competition round. There is currently no limit on the number of times an unsuccessful application can be resubmitted. Although applicants can respond to the previous application’s feedback, CIHR instructs peer review committee members to consider a resubmitted grant application as a new application (i.e., relative to all others in the current competition) and states that “…addressing previous reviews does NOT guarantee that the application will be better positioned to be funded….” A resubmitted application can be reviewed by a different peer review committee than the previously unfunded application, and committees do not have access to the previous version of the resubmitted application, though members are asked to read and evaluate the applicant’s response to the previous review (Canadian Institutes of Health Research 2016).

Study design and dataset

Applications submitted to the Open Grant competition were screened to identify new applications that were unsuccessful and were subsequently resubmitted to the same program by the same Principal Investigator (PI). Data were extracted from 50 138 applications to the CIHR Open Operating Grant and Project Grant competitions. Applications submitted to the 2016 Open Grant competitions were not included because different reviewing and adjudication methodologies were used that year. For the random forest model (see Data analysis, below), a randomly selected subset (20% of the data) was held back for model testing (“Test dataset”), and the remaining 80% of the data were used for model training (“Training dataset”).
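For illustration, a minimal sketch of this 80/20 split in R is shown below; the data frame name (application_pairs) and the seed are placeholders for this sketch, not part of CIHR's actual pipeline.

```r
# Minimal sketch of the 80/20 train/test split described above.
# `application_pairs` is a hypothetical data frame with one row per
# paired (previous unfunded application, resubmission) record.
set.seed(123)  # arbitrary seed for reproducibility (assumption)

n        <- nrow(application_pairs)
test_idx <- sample(seq_len(n), size = round(0.2 * n))

test_data  <- application_pairs[test_idx, ]   # 20% held back ("Test dataset")
train_data <- application_pairs[-test_idx, ]  # 80% for training ("Training dataset")
```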

Consent

Every researcher who submitted an application to the CIHR Project Grant Competition consented to the CIHR Policy on “Use of Personal Information”. CIHR maintained a record of all grant applications and the assessment records (including information from the Canadian Common Curriculum Vitae) of researchers who applied for funding. Our study included an objective of quality assurance and quality improvement of CIHR’s programs and thus fell under Article 2.5 of the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans.

Data analysis

We performed an exploratory analysis (Tukey 1980) to (1) describe the proportion of new and resubmitted applications that were funded and (2) identify the importance of variables related to the outcome (funded/not funded) of resubmissions using a random forest model. The primary outcome was the proportion of successful resubmitted and new grant applications (%). Analyses were performed using the R programming language (version 4.3.1) and the randomForest R package (Liaw and Wiener 2002; R Foundation for Statistical Computing 2023).

Data wrangling

The PI’s sex was hypothesized to be a significant explanatory variable (Guthrie et al. 2018a). Applications where the PI had not self-identified their sex (n = 37) were excluded. Applications were identified as resubmissions if they included a response to a previous review and could be paired with a preceding unfunded new application submitted by the same PI. The unit of analysis was application pairs. Resubmissions that could not be paired with a previously unfunded application (n = 9310) were excluded from analyses.
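The pairing rule can be made concrete with a short R sketch; the column names below (pi_id, program, app_id, submission_date, is_resubmission, funded, score, rank) are illustrative assumptions, not CIHR's actual schema.

```r
library(dplyr)

# Hypothetical columns in `apps`: pi_id, program, app_id, submission_date,
# is_resubmission (logical), funded (logical), score, rank.
unfunded_new <- apps %>%
  filter(!is_resubmission, !funded) %>%
  select(pi_id, program, prev_id = app_id, prev_date = submission_date,
         prev_score = score, prev_rank = rank)

# Pair each resubmission with the nearest preceding unfunded application
# from the same PI in the same program; unmatched resubmissions drop out,
# mirroring the exclusion described above.
application_pairs <- apps %>%
  filter(is_resubmission) %>%
  inner_join(unfunded_new, by = c("pi_id", "program")) %>%
  filter(prev_date < submission_date) %>%
  group_by(app_id) %>%
  slice_max(prev_date, n = 1, with_ties = FALSE) %>%  # keep nearest predecessor
  ungroup()
```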

Variable importance

We used a 10-fold cross-validated random forest model to identify the importance of candidate variables related to the outcome of resubmissions. A random forest is an ensemble learning model that combines multiple decision trees and provides methods for describing the relative importance of explanatory variables, such as the Gini variable importance measure (Breiman 2001; Saarela and Jauhiainen 2021). We chose a random forest model for its ability to derive ranked lists of variables important to the outcome and to handle nonlinear relationships and interactions between variables and outcomes. We used partial dependence plots to visualize the marginal effect of important variables on the probability of funding. Results of hyperparameter tuning are shown in the Supplementary Material (Fig. S1). The selected parameter values maximized model sensitivity (true positive classification) and specificity (true negative classification) (mtry = 3, nodesize = 1, ntree = 1000).
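A minimal sketch of this fit using the randomForest package follows; only the tuned hyperparameter values (mtry = 3, nodesize = 1, ntree = 1000) come from the text, while the fold construction, seed, and column names (funded as the outcome factor) are illustrative assumptions.

```r
library(randomForest)

# Illustrative 10-fold cross-validated fit. `train_data` is assumed to hold
# the outcome `funded` (factor) plus the eight candidate explanatory variables.
set.seed(123)  # arbitrary seed (assumption)
folds <- sample(rep(1:10, length.out = nrow(train_data)))

gini_by_fold <- vector("list", 10)
for (k in 1:10) {
  fit_k <- randomForest(
    funded ~ ., data = train_data[folds != k, ],
    mtry = 3, nodesize = 1, ntree = 1000  # tuned values reported above
  )
  # Mean decrease in Gini impurity for each variable, for this fold
  gini_by_fold[[k]] <- importance(fit_k)[, "MeanDecreaseGini"]
}
```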
The relative importance of each explanatory variable was calculated using the mean decrease in Gini impurity (a measure of how effectively the explanatory variables split the data with respect to the primary outcome), aggregated across the folds. Although the primary aim of our study was description, not prediction, for completeness we also measured the prediction accuracy of the model using the unseen (Test) dataset; these results are shown in the Supplementary Material.
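Continuing the sketch above, fold-level importances can be averaged and the refit model checked against the held-out data; partialPlot() from the randomForest package produces marginal-effect plots of the kind reported in Fig. 4. Object, column, and class names remain illustrative assumptions.

```r
# Average Gini importance across the 10 folds and rank the variables.
gini_mean <- Reduce(`+`, gini_by_fold) / length(gini_by_fold)
sort(gini_mean, decreasing = TRUE)  # ranked variable importance

# Secondary check: prediction accuracy on the unseen Test dataset.
final_fit <- randomForest(funded ~ ., data = train_data,
                          mtry = 3, nodesize = 1, ntree = 1000)
pred <- predict(final_fit, newdata = test_data)
table(predicted = pred, observed = test_data$funded)  # confusion matrix

# Marginal effect of one variable (e.g., previous rank) on funding probability.
partialPlot(final_fit, pred.data = train_data, x.var = "prev_rank",
            which.class = "funded")
```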

Explanatory variables

The secondary outcome was the Gini importance of each of the candidate explanatory variables related to resubmission success. Candidate explanatory variables were selected from the factors related to grant resubmissions identified in our recent scoping review (Lasinsky et al. 2024), from other work highlighting possible biases in grant peer review selection (Tamblyn et al. 2018; Guthrie et al. 2018b, 2019), and from factors related to resubmissions in other funding competitions (Boyington et al. 2016). Two variable categories were identified: (i) characteristics of the applicant (the named PI) and (ii) factors related to peer review. Variables included in the final model were: self-identified applicant sex (male, female); previous total CIHR funding awarded in CAD$M to all projects where the PI was a named team member (“PI $ Funding”); the number of CIHR-funded projects that contributed to “PI $ Funding” at the time of application (“PI # grants”); whether the applicant had chosen English or French as the application language; whether the resubmitted application was reviewed by the same peer review committee (true, false); whether the resubmitted application was reviewed by one or more of the same peer reviewers within the peer review committee (true, false); and the score (“Previous Score”, out of 5) and ranking (“Previous % Rank”) of the preceding application.
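For concreteness, one possible encoding of these eight variables as an R model formula is sketched below; the column names are illustrative placeholders rather than CIHR's internal field names.

```r
# Hypothetical mapping of the candidate variables to model columns:
#   sex            self-identified applicant sex (male, female)
#   pi_funding     previous total CIHR funding, in CAD$M ("PI $ Funding")
#   pi_n_grants    number of CIHR-funded projects ("PI # grants")
#   language       application language (English, French)
#   same_committee reviewed by the same peer review committee (TRUE/FALSE)
#   same_reviewer  one or more of the same peer reviewers (TRUE/FALSE)
#   prev_score     preceding application's score ("Previous Score", /5)
#   prev_rank      preceding application's ranking ("Previous % Rank")
rf_formula <- funded ~ sex + pi_funding + pi_n_grants + language +
  same_committee + same_reviewer + prev_score + prev_rank
```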

Results

Proportion of successful applications

The dataset included 40 791 applications, 26 142 of which were new applications and 14 649 of which were resubmissions. During the study period, 3034 (∼12%) new applications and 3415 (∼23%) resubmissions were funded. Figure 1 shows density plots and median values for the scores and rankings for both new and resubmitted applications.
Fig. 1. Density plots for the scores (panel A) and rank (panel B) for new and resubmitted applications. Circles and intervals represent the median ± the 66% and 95% quantiles.

Variables related to resubmission success

Table 1 shows descriptive statistics for the explanatory variables related to the Resubmission outcome (funded vs. unfunded) in both the training and test datasets. Relationships between candidate variables are shown in the Supplementary Material, Figs. S2–S4. See “Explanatory variables” above for a description of the variables entered into the model.
Table 1. Explanatory variables related to resubmission outcome (funded vs. unfunded).

                                      Test dataset              Train dataset
                                      Funded      Unfunded      Funded      Unfunded
                                      N = 683     N = 2246      N = 2732    N = 8988
Language (N)
  English                             676         2189          2686        8825
  French                              7           57            46          163
Sex (N)
  Female                              248         856           1037        3296
  Male                                435         1390          1695        5692
PI # Grants (median [IQR])            8 [12]      6 [10]        7 [12]      6 [10]
Shared reviewer (N)
  False                               268         1241          1027        4998
  True                                415         1005          1705        4000
Shared Committee (N)
  False                               112         471           339         2102
  True                                571         1775          2393        6886
Previous score (/5, median [IQR])     4.1 [0.38]  3.8 [0.41]    4.0 [0.38]  3.8 [0.41]
Previous % rank (/100, median [IQR])  71 [22]     50 [39]       69 [25]     48 [38]
PI $ funding (CAD$M)
  $0                                  38          231           185         818
  $1–<$1 M                            156         616           599         2418
  $1 M–<$5 M                          236         804           1030        3308
  $5 M–<$10 M                         130         293           453         1182
  $10 M+                              123         302           465         1262

Note: PI # Grants (median [IQR]): the number of the PI’s CIHR-funded projects at the time of application; PI $ funding (CAD$M): previous total CIHR funding awarded in CAD$M to all projects where the PI was a named team member. CIHR, Canadian Institutes of Health Research; PI, Principal Investigator.

The Gini variable importance data (from the training dataset) are shown in Fig. 2. The % rank assigned during peer review to the previous submission was the most important explanatory variable, followed by the previously assigned score (Fig. 3). Application language (English, French) was the least important feature in the model. Partial dependence plots showing the marginal effects of previous rank, previous score, and the number of grants held by the PI on the probability of funding are shown in Fig. 4.
Fig. 2. The candidate features entered into the random forest model, ranked by the mean decrease in Gini impurity (“Gini importance”). PI # Grants: the number of the PI’s CIHR-funded projects at the time of application; PI $ funding: previous total CIHR funding awarded in CAD$M to all projects where the PI was a named team member. PI, Principal Investigator; CIHR, Canadian Institutes of Health Research.
Fig. 3. Density plots for the previous application’s score (Panel A) and rank (Panel B) for unfunded and funded resubmitted applications (combined for test and training datasets). Circles and intervals represent the median ± the 66% and 95% quantiles.
Fig. 4. Partial dependence plots showing the relationship between important variables and the probability of funding. (A) Probability of funding as a function of the previous percentage rank. (B) Probability of funding by the previous score. (C) Probability of funding as a function of the number of previous grants awarded to the Principal Investigator.

Discussion

Our results add to the literature on health research grant funding and peer review in two ways. First, we show that resubmitted applications to the CIHR Project Grant Competition were funded more often and ranked and scored higher than new applications. Second, we show that the peer review ranking of the previous application was the most important variable related to whether a resubmitted application was funded. Applicant sex, application language, and peer review continuity were the least important. Applicants who are considering resubmitting an unsuccessful application should feel encouraged by improved peer review outcomes for resubmissions and may wish to consider the ranking of their previous application when deciding whether to resubmit a grant application to the CIHR Project Grant Competition.
Consistent with data from the U.S. NIH (Lauer 2016; Nakamura et al. 2021), we found that a greater proportion of resubmitted applications were funded than new applications and that, on average, resubmissions received a higher rank and score. We suggest three potential drivers of these improved outcomes. First, those who chose to resubmit may have been those who could adequately respond to the peer review feedback. Second, because CIHR treats resubmissions as new applications and instructs reviewers to compare them only to their present cohort, those who resubmit may be those whose application was more likely to be funded anyway (i.e., the application may be good but not quite good enough compared with the previous cohort it was initially judged against). Finally, a proportion of applicants whose applications received unfavourable reviews may choose not to resubmit, increasing the proportion of resubmissions that are likely to be funded.
Unlike journal peer review, providing applicant feedback is a lower priority for grant peer review committees (Guthrie et al. 2018a). However, applicants are advised to use feedback to help with grant resubmission, and do (Derrick et al. 2023; National Institute of Allergy and Infectious Diseases 2023; Hunter et al. 2024). High-quality feedback helps researchers decide whether to resubmit their application (Derrick et al. 2023), whereas low-quality feedback can confuse applicants (Gallo et al. 2021) and may lead to multiple unsuccessful resubmissions (von Hippel and von Hippel 2015). CIHR has clear guidance for reviewers to promote high-quality reviews (Canadian Institutes of Health Research 2018) and evaluates the performance of peer reviewers (Ardern et al. 2023), which may have contributed to the improved success of resubmissions seen here. Unlike journal peer review (Superchi et al. 2019), there has been little scientific evaluation of the impact of grant peer review feedback quality (see also Derrick et al. 2023), perhaps due to the continuing opacity of many grant peer review systems despite increasing calls for transparency (Horbach et al. 2022; Bouter 2023; Schweiger 2023). Given the substantial time, effort, and financial costs of grant applications, revisions, and resubmissions to applicants and society (Herbert et al. 2013; von Hippel and von Hippel 2015; Schweiger 2023), more research is warranted on how reviewer feedback affects the volume, quality, and success of subsequent grant applications.
The most important factor related to the outcome of a resubmitted CIHR Open Grants competition application was the rank given to the previous application. While many applicants may have assumed that the result of the previous peer review was related to the binary outcome of a resubmitted application, this study provides direct evidence of the relationship. The importance of rank relative to score may surprise some applicants, especially given the relevance of previous scores for resubmissions in other funding systems (Boyington et al. 2016; Lauer 2017; Hunter et al. 2024). For some applicants, knowledge of the relationship between the previous application’s rank and resubmission success may help them make an informed decision about whether to resubmit a grant application. In particular, the relationships between previous rank and score and the probability of funding success, shown in Fig. 4, may be informative. A cautious interpretation of these figures suggests a higher probability of funding success for applications that were previously ranked 60% or above and scored 4.0 or above. There was a slightly higher probability of funding for applications with very low previous scores (∼2) than for those with previous scores of ∼3.5. Lower-scored grants were rarely resubmitted (see Fig. S1 and Table 1), possibly because they were “streamlined” (not discussed) by peer reviewers. Unmeasured factors may have increased the probability of funding for these outliers, and we caution against extrapolating from the data. Future research is warranted to understand the motivations and approach of applicants who successfully resubmit very low-scored grants, to help guide applicants.
The number of CIHR-funded projects the PI had been awarded was the third most important factor, after previous application rank and score. This result could be interpreted in at least two ways. The first is that grant writing is a skill (Weber-Main et al. 2020), and researchers may become more skilled at responding to reviewer comments with experience, especially experience with successful grants (Guyer et al. 2021). An alternative interpretation is that the result reflects the “Matthew effect” in grant funding, wherein previous success begets future success. A small group of previously successful researchers, rewarded more often on that basis, would threaten the assumed meritocracy of research funding systems (Bol et al. 2018). Our observational data do not allow us to disentangle these explanations, though we speculate that both may be true. We found that applicant sex was among the least important factors related to resubmission outcome. However, we cannot conclude that sex bias does not affect the outcome of resubmissions, and this topic remains an important avenue for future research (Ceci et al. 2023; Schmaling and Gallo 2023).
Recent innovations in grant peer review including “funding lotteries” (Heyard et al. 2022) and double-blind peer review processes (Qussini et al. 2023) have been designed to reduce inherent bias in grant selection and peer review. Future research should examine whether the relationship between past success and the outcome of resubmissions still exists in applications to competitions that use these mechanisms designed to reduce bias.
This was an exploratory cross-sectional study, which precludes causal inferences about the relationship between the applicant and peer review characteristics and grant resubmission success. We echo previous calls for further examination of grant peer review systems, including randomized controlled trials, to examine the causal factors that influence funding success (Grant 2017; Severin and Egger 2021; Horbach et al. 2022). We were unable to study the influence of many often-reported biases in grant systems. For example, racial disparity in grant peer review and awards is well documented. However, because these data were not routinely collected by CIHR during the timeframe under consideration, we were unable to include self-identified race or ethnicity as factors in this analysis.

Conclusion

Resubmitted applications to the CIHR Project Grant Competition were, on average, funded more often and ranked higher than new submissions. The most important factor related to whether a resubmission was funded was the percent rank assigned to the previous unfunded application. Resubmission may be worthwhile, as long as the initial application was well reviewed and applicants can adequately respond to reviewer feedback. These data help increase the transparency of grant peer review and strengthen recent calls for increased scientific analysis of scientific funding systems.

References

Ardern C.L., Martino N., Nag S., Tamblyn R., Moher D., Mota A., Khan K.M. 2023. Three years of quality assurance data assessing the performance of over 4000 grant peer review contributions to the Canadian Institutes of Health Research Project Grant Competition. FACETS, 8: 1–14.
Azoulay P., Li D. 2020. Scientific Grant Funding.
Bendiscioli S. 2019. The troubles with peer review for allocating research funding. EMBO Reports, 20(12): e49472.
Bol T., de Vaan M., van de Rijt A. 2018. The Matthew effect in science funding. Proceedings of the National Academy of Sciences, 115(19): 4887–4890.
Bouter L. 2023. Why research integrity matters and how it can be improved. Accountability in Research, 31(0): 1277–1286.
Boyington J.E.A., Antman M.D., Patel K.C., Lauer M.S. 2016. Towards independence: resubmission rate of unfunded National Heart, Lung, and Blood Institute R01 research grant applications among early stage investigators. Academic Medicine, 91(4): 556–562.
Breiman L. 2001. Random forests. Machine Learning, 45(1): 5–32.
Canadian Institutes of Health Research. 2016. Project Grant Program: Application Process. Available from https://cihr-irsc.gc.ca/e/49806.html#a2.2 [accessed 5 January 2024].
Canadian Institutes of Health Research. 2018. Review Quality. Available from https://cihr-irsc.gc.ca/e/50787.html [accessed 5 January 2024].
Ceci S.J., Kahn S., Williams W.M. 2023. Exploring gender bias in six key domains of academic science: an adversarial collaboration. Psychological Science in the Public Interest, 24(1): 15–73.
Crow J.M. 2020. What to do when your grant is rejected. Nature, 578(7795): 477–479.
Derrick G.E., Zimmermann A., Greaves H., Best J., Klavans R. 2023. Targeted, actionable and fair: reviewer reports as feedback and its effect on ECR career choices. Research Evaluation, 32: 648.
Gallo S.A., Schmaling K.B., Thompson L.A., Glisson S.R. 2021. Grant review feedback: appropriateness and usefulness. Science and Engineering Ethics, 27(2): 18.
Grant J. 2017. The allocation of scientific grants should be a science. Times Higher Education (THE). Available from https://www.timeshighereducation.com/opinion/allocation-scientific-grants-should-be-science.
Gross K., Bergstrom C.T. 2019. Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology, 17(1): e3000065.
Guthrie S., Ghiga I., Wooding S. 2018a. What do we know about grant peer review in the health sciences?: an updated review of the literature and six case studies. RAND Corporation.
Guthrie S., Ghiga I., Wooding S. 2018b. What do we know about grant peer review in the health sciences? F1000Research, 6: 1335.
Guthrie S., Rincon D.R., McInroy G., Ioppolo B., Gunashekar S. 2019. Measuring bias, burden and conservatism in research funding processes.
Guyer R.A., Schwarze M.L., Gosain A., Maggard-Gibbons M., Keswani S.G., Goldstein A.M. 2021. Top ten strategies to enhance grant-writing success. Surgery, 170(6): 1727–1731.
Herbert D.L., Barnett A.G., Clarke P., Graves N. 2013. On the time spent preparing grant proposals: an observational study of Australian researchers. BMJ Open, 3(5): e002800.
Heyard R., Ott M., Salanti G., Egger M. 2022. Rethinking the funding line at the Swiss National Science Foundation: Bayesian ranking and lottery. Statistics and Public Policy, 9(1): 110–121.
von Hippel T., von Hippel C. 2015. To apply or not to apply: a survey analysis of grant writing costs and benefits. PLoS ONE, 10(3): e0118494.
Horbach S., Tijdink J.K., Bouter L. 2022. Research funders should be more transparent: a plea for open applications. Royal Society Open Science, 9(10): 220750.
Hunter C.J., Leiva T., Dudeja V. 2024. The unfunded grant, now what? Advice, approach, and strategy. Surgery, 175(2): 317–322.
Lasinsky A., Wrightson J., Khan H., Moher D., Kitchin V., Khan K., Ardern C.L. 2024. If at first you don't succeed: biomedical research grant resubmission rates, and factors related to success—a scoping review.
Lauer M. 2016. Are you on the fence about whether to resubmit? Available from https://nexus.od.nih.gov/all/2016/10/28/are-you-on-the-fence-about-whether-to-resubmit/ [accessed 30 December 2023].
Lauer M. 2017. Resubmissions revisited: funded resubmission applications and their initial peer review scores. NIH Extramural Nexus. Available from https://nexus.od.nih.gov/all/2017/02/17/resubmissions-revisited-funded-resubmission-applications-and-their-initial-peer-review-scores/ [accessed 4 January 2024].
Liaw A., Wiener M. 2002. Classification and regression by randomForest. R News, 2(3): 18–22.
Nakamura R.K., Mann L.S., Lindner M.D., Braithwaite J., Chen M.C., Vancea A. 2021. An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes. eLife, 10: e71368.
National Institute of Allergy and Infectious Diseases. 2023. Revise and resubmit an application. Available from https://www.niaid.nih.gov/grants-contracts/revise-resubmit-application [accessed 5 January 2024].
Norgeot B., Quer G., Beaulieu-Jones B.K., Torkamani A., Dias R., Gianfrancesco M., et al. 2020. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nature Medicine, 26(9): 1320–1324.
Qussini S., MacDonald R.S., Shahbal S., Dierickx K. 2023. Blinding models for scientific peer-review of biomedical research proposals: a systematic review. Journal of Empirical Research on Human Research Ethics, 18(4): 250–262.
R Foundation for Statistical Computing. 2023. R: a language and environment for statistical computing.
Saarela M., Jauhiainen S. 2021. Comparison of feature importance measures as explanations for classification models. SN Applied Sciences, 3(2): 272.
Schmaling K.B., Gallo S.A. 2023. Gender differences in peer reviewed grant applications, awards, and amounts: a systematic review and meta-analysis. Research Integrity and Peer Review, 8(1): 2.
Schweiger G. 2023. Can't we do better? A cost-benefit analysis of proposal writing in a competitive funding environment. PLoS ONE, 18(4): e0282320.
Severin A., Egger M. 2021. Research on research funding: an imperative for science and society. British Journal of Sports Medicine, 55(12): 648–649.
Superchi C., González J.A., Solà I., Cobo E., Hren D., Boutron I. 2019. Tools used to assess the quality of peer review reports: a methodological systematic review. BMC Medical Research Methodology, 19(1): 48.
Tamblyn R., Girard N., Qian C.J., Hanley J. 2018. Assessment of potential bias in research grant peer review in Canada. Canadian Medical Association Journal, 190(16): E489–E499.
Tukey J.W. 1980. We need both exploratory and confirmatory. The American Statistician, 34(1): 23–25.
Weber-Main A.M., McGee R., Boman K.E., Hemming J., Hall M., Unold T., et al. 2020. Grant application outcomes for biomedical researchers who participated in the National Research Mentoring Network's Grant Writing Coaching Programs. PLoS ONE, 15(11): e0241851.
Yin Y., Dong Y., Wang K., Wang D., Jones B.F. 2022. Public use and public funding of science. Nature Human Behaviour, 6(10): 1344–1350.

Supplementary material

Supplementary Material 1 (DOCX / 1.13 MB).


Published In

FACETS
Volume 10, 2025
Pages: 1–8
Editors: Paul Dufour and Yann Joly

History

Received: 17 June 2024
Accepted: 13 February 2025
Version of record online: 29 April 2025

Data Availability Statement

Upon publication, the notebook containing the analysis code will be available on the Open Science Framework (https://osf.io/pw45z/). The data used in this analysis are held by CIHR and are not publicly available due to privacy and legal restrictions. Researchers wishing to obtain access to these data need to contact the Vice-President of Research Programs-Operations at CIHR (support-soutien@cihr-irsc.gc.ca) to obtain approval to access de-identified data on operating grant funding program applications submitted between 2010 and 2022. Data for unfunded applications cannot be shared.

Key Words

  1. CIHR
  2. research funding
  3. grant resubmission
  4. peer review
  5. biomedical research
  6. funding success rates

Authors

Affiliations

J.G. Wrightson
Department of Family Practice, Faculty of Medicine, University of British Columbia, Vancouver, Canada
Author Contributions: Formal analysis, Methodology, Visualization, Writing – original draft, and Writing – review & editing.
A. Lasinsky
School of Kinesiology, Faculty of Education, University of British Columbia, Vancouver, Canada
Author Contributions: Conceptualization, Methodology, and Writing – review & editing.
R.R. Snell
Canadian Institutes of Health Research (CIHR), Ottawa, Canada
Author Contributions: Conceptualization, Formal analysis, Methodology, Visualization, and Writing – review & editing.
M. Hogel
Canadian Institutes of Health Research (CIHR), Ottawa, Canada
Author Contributions: Conceptualization, Formal analysis, Methodology, and Writing – review & editing.
A. Mota
Canadian Institutes of Health Research (CIHR), Ottawa, Canada
Author Contributions: Conceptualization, Methodology, and Writing – review & editing.
K.M. Khan
Department of Family Practice, Faculty of Medicine, University of British Columbia, Vancouver, Canada
School of Kinesiology, Faculty of Education, University of British Columbia, Vancouver, Canada
Canadian Institutes of Health Research (CIHR), Ottawa, Canada
Centre for Aging SMART, University of British Columbia, Vancouver, Canada
Author Contributions: Conceptualization and Writing – review & editing.
C.L. Ardern
Centre for Aging SMART, University of British Columbia, Vancouver, Canada
Department of Physical Therapy, Faculty of Medicine, University of British Columbia, Vancouver, Canada
Sport and Exercise Medicine Research Centre, La Trobe University, Melbourne, Australia
Author Contributions: Conceptualization, Methodology, Supervision, and Writing – review & editing.

Author Contributions

Conceptualization: AL, RRS, MH, AM, KMK, CLA
Formal analysis: JGW, RRS, MH
Methodology: JGW, AL, RRS, MH, AM, CLA
Supervision: CLA
Visualization: JGW, RRS
Writing – original draft: JGW
Writing – review & editing: JGW, AL, RRS, MH, AM, KMK, CLA

Competing Interests

Authors AM, MH, and RS are CIHR employees. At the time of submission, author KK was the scientific director of the CIHR Institute of Musculoskeletal Health and Arthritis (CIHR-IMHA).

Funding Information

Canadian Institutes of Health Research: Research Operating Grant (Scientific Directors)
This work was funded in part by the CIHR Research Operating Grant (Scientific Directors) held by KK. CIHR’s Funding Analytics team coordinated data collection, management, and analysis as part of its mandate to foster and deliver high-quality peer review for health research in Canada. CIHR did not participate in interpreting the data or preparing the first draft of the manuscript. CIHR reviewed and approved the manuscript prior to submission. CIHR did not participate in the decision to submit the manuscript for publication.
