Self-promotion and the need to be first in science
Abstract
Scientists, like all humans, are subject to self-deceptive valuations of their importance and profile. Vainglorious practice is annoying but mostly harmless when restricted to an individual’s perception of self-worth. Language that can be associated with self-promotion and aggrandizement is destructive when incorporated into scientific writing. So too is any practice that oversells the novelty of research or fails to provide sufficient scholarship on the uniqueness of results. We evaluated whether such tendencies have been increasing over time by assessing the frequencies of articles claiming to be “the first”, and those that placed the requirement for scholarship on readers by using phrases such as “to the best of our knowledge”. Our survey of titles and abstracts of 176 journals in ecology and environmental biology revealed that the frequencies of both practices increased linearly over the past half century. We thus warn readers, journal editors, and granting agencies to use caution when assessing the claimed novelty of research contributions. A system-wide reform toward more cooperative science that values humility, and abhors hubris, might help to rectify the problem.
“…the whole point of establishing a science is that it is supposed to be self-correcting and not based on reputation, hierarchy, ignorance, naivete, or self-deceptive bias.”—Robert Trivers (2015), Wild Life: Adventures of an Evolutionary Biologist

“We use language to embroider and exaggerate our own dossiers and gently diminish or disparage those of others.”—Mark Pagel (2012), Wired for Culture: Origins of the Human Social Mind
Introduction
Narcissistic scientists have long used the force of their personality to build reputations and power, advance their careers, and unduly influence research policy (Lemaitre 2017). The risk of engaging in vainglorious behaviour, or being perceived to do so, is exacerbated by the advent of the internet, social media, networking, and emphasis on bibliometric indicators. Hiring and grant decisions are increasingly based on questionable quantitative performance metrics (Edwards and Roy 2017; Vanecek and Pecha 2020) that reward prolific authors and those publishing in journals with high impact factors (e.g., Moher et al. 2018). These practices continue even though productive and “novel” science is not necessarily reliable or reproducible (Baker 2016; Ritchie 2020), and even though highly ranked journals emphasizing novelty are likely publishing less reliable science than others (Brembs 2019). It is as though the practice originated with a malfeasant designer intent on promoting Planck’s (1949) principle that scientific truths advance through the death of opponents, thus ensuring that no idea arises before its time. The principle is masterfully confirmed in the demonstration that premature deaths of eminent scientists create opportunities for the advancement of knowledge in life sciences (Azoulay et al. 2019).
Self-promotion advocates have been active since at least the 1990s (Reis 1999). Universities, publishers, citation databases, and scientific journals (including this one) regularly assist authors in promoting and “selling” (Vinkers et al. 2015) their work and provide advice on increasing their use of social media and personal branding (e.g., Fiske 2018; Hotez 2018; Cheplygina et al. 2020). Questionable rankings generated from such practices are frequently used by universities as they compete for prestige and students (Edwards and Roy 2017). Such practices modify behaviour (e.g., de Rijcke et al. 2016) and run the risk of so-called “post-production” and other forms of misconduct (Biagioli 2016; Seeber et al. 2017; Biagioli et al. 2019) aimed at enhancing the impact of publications and their authors. Examples range from the pernicious cheat (Biagioli 2016), to self-plagiarism (Horbach and Halffman 2019), to self-citations and citation cartels designed to boost bibliometric indicators (Fister et al. 2016; Seeber et al. 2017; Ioannidis et al. 2019; Van Noorden and Chawla 2019). Lexicographic analyses demonstrate a dramatic 880% four-decade increase in the use of “positive” words in PubMed titles and abstracts, with some, such as “novel” and “innovative”, increasing by as much as 15,000% (Vinkers et al. 2015). Other studies reveal that male authors are more likely than female authors to describe their research with positive terms (Lerchenmueller et al. 2019).
We wondered whether the increased emphasis on impact and marketing, and widespread gaming of their metrics (e.g., Biagioli and Lippman 2020), might correspond with other changes in the ways that scientists write their papers. There are numerous mechanisms by which scientists might, consciously or unintentionally, promote themselves and their work. We were especially interested in two phenomena: (i) claims to be “the first” and (ii) evidence of incomplete or excused scholarship. Statements of novelty and priority not only influence impact, but they can also lead to more cursory assessments of data and evidence (e.g., Editorial 2021) and yield long-term negative consequences for science. Priority races often evolve towards less reliable discovery as scientists compete to reap incentives and other rewards of research described as being novel or first (Higginson and Munafò 2016; Smaldino and McElreath 2016; Tiokhin et al. 2021). With these points in mind, we searched journal titles and abstracts for statements promoting novelty or those that could be interpreted as incomplete scholarship. We restricted our search to the broad fields of ecology, evolution, and environmental biology because it is the body of literature with which we are most familiar. Although our results apply only to patterns in that literature, it is likely that any time-varying changes in writing style that heighten impact are not limited to the journals, articles, and authors in our sample.
Methods
Data summaries
We began on 8 May 2020 by downloading the 2018 Scimago Journal & Country Rank (SJR) of the 609 journals listed in the “Ecology, Evolution, Behavior and Systematics” category of Scimago’s “Agricultural and Biological Sciences” subject area (scimagojr.com/journalrank.php?category=1105&area=1100&type=j&page=2&total_size=609). Scimago rankings use the Scopus database. We decomposed the data into five sets corresponding to the 100 top-ranked journals and the top-ranked 25 journals in each ranked quartile (of all 609 journals). Doing so allowed us to search for overall patterns in “being the first”, as well as to evaluate whether there was a signal associated with top-ranked versus lower-ranked clusters of journals. We added eight additional journals (Ambio, PLoS One, PLoS Biology, the Proceedings and Philosophical Transactions of the Royal Society (A and B), and the Proceedings of the National Academy of Sciences (PNAS) of the USA; notably, PNAS now instructs authors to “not include statements of novelty or priority” (pnas.org/authors/submitting-your-manuscript; accessed 31 August 2021)) to the “top 100” to generalize our results beyond ecology and evolution. We then searched the titles and abstracts of each journal in the Web of Science database (1975–2019; final sample = 101 journals; seven of the top 100 SJR-ranked journals were either absent from, or only partially represented in, the database). To attain equal samples for analyses of journal clusters, we compensated for journals absent from the Web of Science database by iteratively extending the rank of journals until we reached 25 journals in each quartile (25 of 25, 25 of 28, 25 of 28, and 25 of 64 top-ranked journals in quartiles 1–4, respectively).
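The selection step can be summarized, purely for illustration, with a short sketch; the data structure and field names below are assumptions for exposition, not the authors’ workflow.

```python
# Illustrative sketch (not the authors' code): take the top n Web of Science-indexed
# journals from a block of SJR-ranked journals, extending down the ranking to
# compensate for journals that are missing from the database.
def top_n_indexed(ranked_journals, n=25):
    """ranked_journals: list of dicts in descending SJR rank with an 'in_wos' flag."""
    selected = []
    for journal in ranked_journals:        # walk down the ranking
        if journal["in_wos"]:              # skip journals absent from Web of Science
            selected.append(journal)
        if len(selected) == n:
            break
    return selected

# Example use: quartile_sets = [top_n_indexed(block) for block in quartile_blocks],
# where quartile_blocks are the four consecutive blocks of the 609 ranked journals.
```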
We built new data sets using two separate word searches. Using Web of Science, we searched five-year intervals of English titles and abstracts of each journal for occurrences of the phrase “the first” (an estimate of self-promotion, but not necessarily intentional) and separately for either of the phrases “best knowledge” or “our knowledge” (an estimate of possessive but limited scholarship). We screened the titles and abstracts of each occurrence to ensure that use of the terms represented a statement that could be construed as self-promotion or limited scholarship (Supplementary Material 1). Some titles and abstracts included more than a single occurrence of the corresponding phrase. We counted these relatively few egregious examples as separate instances of self-promotion or limited scholarship because doing so yields a more faithful estimate of the intensity, as well as the frequency, of both behaviours. We summed the total number of articles and occurrences within journals across time and, separately, across the time intervals among journals. We used these sums to calculate the weighted frequencies of occurrence among journals and over time.
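To make the bookkeeping explicit, a minimal sketch of the weighting follows; the record structure is an assumption chosen for illustration rather than a reproduction of the authors’ pipeline.

```python
from collections import defaultdict

def weighted_frequency(records):
    """records: one (interval, retained_occurrences) tuple per screened article.
    Returns the proportion of retained occurrences per article in each 5-year interval."""
    articles = defaultdict(int)
    occurrences = defaultdict(int)
    for interval, retained in records:
        articles[interval] += 1            # every screened article counts once
        occurrences[interval] += retained  # an article can contribute more than one occurrence
    return {t: occurrences[t] / articles[t] for t in sorted(articles)}

# e.g., weighted_frequency([("1975-1979", 0), ("1975-1979", 1), ("2015-2019", 2)])
# -> {"1975-1979": 0.5, "2015-2019": 2.0}
```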
The rapid increase in open access publishing might itself be responsible for any trends that we detected. We therefore completed our analysis by using the 101 “highest ranked” journals to compare the use of “the first” and “best” between “closed” and “open” access publications. These analyses excluded journals that changed their status from closed to open during the 45-year timeframe of our analyses.
Analyses
We tested for temporal patterns with linear, quadratic, and cubic regressions of the number of published articles and the proportion of occurrences (relative to articles published) over time. We retained only models for which each term in the model was statistically significant (P ≤ 0.05). We then tested for differences in patterns among quartiles in the Web of Science data with two general linear models (GLMs) (one for “the first” and one for “best”) using time as a covariate. We completed our analysis with a third GLM on the 101 “top” journals in which we evaluated possible differences between closed versus open access in a model that also included the metric (“the first” or “best”), again with time as a covariate. We did not include interaction terms in the GLMs to avoid collinearities. We conducted all analyses with Minitab® software, versions 19 and 20 (Minitab, LLC 2019, 2020).
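The model-retention rule can be illustrated outside Minitab with ordinary least squares; the sketch below uses statsmodels purely for exposition and is an assumption about the workflow, not a reproduction of it.

```python
import numpy as np
import statsmodels.api as sm

def retained_polynomial_model(x, y, alpha=0.05, max_order=3):
    """Fit linear, quadratic, and cubic regressions of y on x and keep the
    highest-order model in which every fitted term is statistically significant."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    retained = None
    for order in range(1, max_order + 1):
        X = sm.add_constant(np.column_stack([x**k for k in range(1, order + 1)]))
        fit = sm.OLS(y, X).fit()
        if (fit.pvalues[1:] <= alpha).all():   # all non-intercept terms significant
            retained = (order, fit)
    return retained   # e.g., (2, <quadratic fit>) when no cubic term qualifies
```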
Results
Our Web of Science title and abstract screening of 176 journals returned a total of 742,993 articles. Of these, we recorded 74,367 occurrences of “the first”, of which 42,585 qualified as examples of claiming novelty (57.3% of occurrences, 5.7% of articles). “Best” occurred much less frequently (6,452 occurrences) but with a high proportion of occurrences representing excused scholarship (6,009; 93.1% of occurrences, 0.81% of articles).
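The reported percentages follow directly from these counts; a quick arithmetic check using the totals above:

```python
# Derive the reported percentages from the reported counts.
articles = 742_993
first_occurrences, first_retained = 74_367, 42_585
best_occurrences, best_retained = 6_452, 6_009

print(f"'the first': {first_retained / first_occurrences:.1%} of occurrences, "
      f"{first_retained / articles:.1%} of articles")      # 57.3% and 5.7%
print(f"'best':      {best_retained / best_occurrences:.1%} of occurrences, "
      f"{best_retained / articles:.2%} of articles")       # 93.1% and 0.81%
```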
Patterns through time
Four of five regressions revealed an accelerating rate of publication across years (the fifth included a positive, nearly significant quadratic term), and all revealed a linear increase in the use of “the first” through time (Fig. 1, Table 1). No higher-order model met the selection criterion (P ≤ 0.05) for inclusion. There were, however, significant differences in the proportional use of “the first” among quartiles. Authors were more prone to using “the first” in journals in quartiles 3 and 4 than in journals in quartiles 1 and 2 (Fig. 2, Table 2). The result was qualitatively identical in an analysis that excluded the first three time periods, during which only one article used “the first”. Use of “best” also increased through time, often at an accelerating rate (quartiles 1 to 3, Table 1). Significant differences among quartiles were also revealed in the GLM assessing the use of “best” through time, but in this case, authors were least likely to use “best” in quartile 1 journals (Table 2).
Fig. 1.
Table 1.
| Model | Equation | Degrees of freedom | F-ratio | R²adj (%) | P |
|---|---|---|---|---|---|
| Number of articles | | | | | |
| Top 100 | Y = 42628 − 25143X + 4907X² | 2,6 | 39.4 | 90.6 | <0.001 |
| Quartile 1 | Y = 589 − 916.2X + 298.2X² | 2,6 | 228.6 | 98.3 | <0.001 |
| Quartile 2 | Y = 2486 − 812.8X + 160.9X² | 2,6 | 123.1 | 96.8 | <0.001 |
| Quartile 3 | Y = 1003 − 273.4X + 83.35X² | 2,6 | 292.8 | 98.6 | <0.001 |
| Quartile 4 | Y = −571.6 + 558.3X | 1,7 | 37.3 | 82.0 | <0.001* |
| Proportion using “the first” | | | | | |
| Top 100 | Y = −0.0209 + 0.0106X | 1,7 | 121.5 | 93.8 | <0.001 |
| Quartile 1 | Y = −0.0209 + 0.0104X | 1,7 | 73.1 | 90.0 | <0.001 |
| Quartile 2 | Y = −0.0190 + 0.0107X | 1,7 | 170.2 | 95.5 | <0.001 |
| Quartile 3 | Y = −0.0319 + 0.0156X | 1,7 | 133.8 | 94.3 | <0.001 |
| Quartile 4 | Y = −0.0168 + 0.0176X | 1,7 | 18.8 | 68.9 | 0.003 |
| Proportion using “best” | | | | | |
| Top 100 | Y = −0.0031 + 0.0016X | 1,7 | 162.1 | 95.3 | <0.001 |
| Quartile 1 | Y = 0.00007 − 0.00009X + 0.00002X² | 2,6 | 76.2 | 95.0 | <0.001 |
| Quartile 2 | Y = 0.00002 − 0.00017X + 0.0001X² | 2,6 | 136.3 | 97.1 | <0.001 |
| Quartile 3 | Y = 0.0013 − 0.0010X + 0.0002X² | 2,6 | 25.8 | 86.1 | 0.001 |
| Quartile 4 | Y = −0.0019 + 0.0009X | 1,7 | 27.2 | 76.6 | 0.001 |

Note: R² adjusted for degrees of freedom. *Quadratic P = 0.057.
Fig. 2.
Table 2.
| Source | Degrees of freedom | F-ratio | P |
|---|---|---|---|
| The first (R²adj = 76.8%) | | | |
| Quartile | 3 | 8.40 | <0.001 |
| Time | 1 | 125.73 | <0.001 |
| Error | 31 | | |
| Total | 35 | | |
| Best (R²adj = 57.7%) | | | |
| Quartile | 3 | 5.09 | 0.006 |
| Time | 1 | 55.63 | <0.001 |
| Error | 31 | | |
| Total | 35 | | |
Open versus closed access
The use of “the first” was much more prominent than the use of “best” (mean proportion for “first” = 0.06; mean for “best” = 0.01; Table 3). Differences in the respective frequencies of the two metrics dominated the analysis (P < 0.001) relative to time period (because “first” and “best” have different time-dependent patterns of increase, Table 1) and relative to whether access was open (23 journals) or closed (75 journals; P ≈ 0.1 for both variables, Table 3).
Table 3.
| Source | Degrees of freedom | F-ratio | P |
|---|---|---|---|
| Time period | 1 | 3.24 | 0.097 |
| Metric (“first” vs “best”) | 1 | 109.32 | <0.001 |
| Access (“open” vs “closed”) | 1 | 3.16 | 0.101 |
| Error | 12 | | |
| Total | 15 | | |

Note: None of the interaction terms was statistically significant.
Discussion
Questionable research integrity includes much more than the three sins of fabrication, falsification, and plagiarism (Szomszor and Quaderi 2020). It includes any practice that yields advantages beyond those attributed to true scientific achievements. It is in this context that we are concerned about the increased use of “the first” in ecology, evolution, and environmental biology. We do not suppose that most authors use the term as a conscious form of self-aggrandizement, but neither do we suppose that most authors use it to aid future historians.
We are similarly concerned about the rapid rise in terms related to “the best of our knowledge”. We do not suppose that it relates to a conscious bias in scholarship, but neither do we suppose that most authors’ use of the first person in this context is without some form of intentional or unintentional self-deceptive boasting, including our use of the first person in this and other paragraphs.
It thus appears that increased use of “the first” and “best” represents a small subset of practices increasing the frequency of self-promotion in science. Although such practices mirror an associated societal shift towards narcissism, they also reflect advocacy for the use of social media in science (Darling et al. 2013; Côté and Darling 2018), increased emphasis on individual branding (e.g., McDonnell 2015; Hotez 2018), various forms of gaming and manipulation (Biagioli and Lippman 2020), and other mechanisms to expand influence or seek advantage. A partial list includes self-praise on Twitter, Facebook, Instagram, and other platforms; citation cartels (Fister et al. 2016); inclusion of honors, memberships, and titles in valedictions; authors of convenience; recommending reviewers favourable to one’s interests; labelling potentially “negative reviewers” as biased and incapable of objectivity; and use of writing styles that ignore conflicting literature (or worse, data) or that promote self while denigrating other scientists.
A common, but not necessarily intentionally malfeasant, mechanism is the use of authors as subjects of sentences (as in “authors X and Y used false assumptions to predict” rather than the more accurate statement that “assumptions thought to be true yielded the prediction that… (authors X and Y)”). When used intentionally, the ploy aims to discredit and scorn authors rather than to critically evaluate their results, interpretations, or viewpoints. The practice is self-ingratiating because it supplants reputable authorities with oneself. Such practices should be deemed unacceptable, even in cases where the intent is praise rather than blame. Though nowhere near as harmful or pernicious as the willful distribution of untruths on social media, the cost of self-promotion, even when unintended, is too high a price for science to bear.
The practices not only catalyze their users’ apparent impact and reputation, but they also auto-catalyze self-deceptive importance and influence. Neither best serves the interests of science. We are not criticizing the rightful pride that scientists can and should take in their achievements and contributions. What we are criticizing is the appropriation of that pride, whether conscious or not, for purposes of self-promotion, privilege, and undeserved impact.
The point is not whether reputations, good or bad, are undeserved, but rather whether scientific writing and communication are morphing into a form of self-deception that is as much (and in some cases more) about the authors as it is about the research. Although the examples explored here show consistent increases in use over time, they still represent a small proportion of scientific writing, at least in the fields covered by our survey. More worrisome, however, is that the use of both metrics has been increasing consistently through time. Use of “the first” has been increasing linearly at a rate of about one percentage point in each 5-year interval. The rate of increase in “best” has been much slower in most journals surveyed, but its use has been accelerating in three of the four quartiles analyzed (quadratic terms, Table 1). Authors have increasingly emphasized their profile through the ways in which they describe their research and scholarship.
Although one cannot use past patterns to predict future use, the exercise provides insight into potential changes in scientific writing. Imagining that an academic lifetime lasts approximately 30 years, and using data from the top 100 journals, a newly hired ecologist can anticipate that roughly 15% of all abstracts and titles will claim to be “the first” by the end of that person’s career (2050). Use of “best” would similarly increase from the current value of approximately 1% to about 2.3% by 2050.
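These projections follow from the top-100 regression equations in Table 1 when X is read as the 5-year interval index with 1975–1979 as interval 1; that time coding is an assumption on our part, but it reproduces the values quoted above. A minimal worked check:

```python
# Worked check of the 2050 projections using the top-100 regression equations in
# Table 1, assuming X is the 5-year interval index with X = 1 for 1975-1979
# (so 2050 falls in interval 16). This coding reproduces the figures quoted above.
def interval_index(year, start=1975, width=5):
    return (year - start) // width + 1

x = interval_index(2050)                 # 16
first_2050 = -0.0209 + 0.0106 * x        # about 0.15, i.e., roughly 15% of titles/abstracts
best_2050 = -0.0031 + 0.0016 * x         # about 0.023, i.e., roughly 2.3%
print(f"the first: {first_2050:.2%}, best: {best_2050:.2%}")
```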
We have insufficient data to assess how broadly our results, restricted to journals in ecology, evolution, behaviour, and systematics, might apply to other disciplines. The journals, nevertheless, cover an array of research and scholarly interests, and it would be surprising if similar patterns did not emerge elsewhere.
As in many other examples, we have much to learn from Darwin. Recall his definitive test of natural selection: “If it could be proved that any part of the structure of any one species had been formed for the exclusive good of another species, it would annihilate my theory, for such could not have been produced through natural selection” (Darwin 1859, p. 201). He could just as easily have written, “to the best of my knowledge no part of the structure of any one species has been formed for the exclusive good of another species.” Please, dear reader, decide which is the stronger statement. Which has the greater opportunity to advance knowledge? And, yes, we are aware that he also wrote “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find no such case” (Darwin 1859, p. 189). However, even here his challenge to natural selection was clear, and his knowledge was fully revealed.
One might wonder whether greater use of double-blind review might help to resolve, if not self-promotion, then problems associated with privilege, force of personality, and scientific dominance by the status quo. However, the practice does not eliminate the potential for biased (and sometimes vitriolic) unattributed reviews, nor does it eliminate the likelihood that authors might overly promote the novelty of their results within and outside of their publications. Others (e.g., Ritchie 2020) call for a revolutionary version of “open science” modeled on news services in which preprints posted online would be reviewed and graded by an organization independent of publishers. We suggest a less extreme alternative in which scientists cultivate humility, journals and university press officers tone down the hyperbole, and reviewers take full responsibility for their comments, suggestions, opinions, and recommendations. Eliminate the hype and let reviewers do, and be rewarded for, their important contribution to the community of scientists. Some will point out that signed reviews put their authors at risk of vindictive payback when submitting their own manuscripts and applications for funding and employment. Those risks are ameliorated in a more open and respectful pursuit of science in which no one, including grant reviewers and selection committees, dons a cloak of anonymity. Regardless, one can imagine that the benefits of a more objective and modest scientific culture outweigh the costs, and that such a culture will weed out bad actors with biased agendas.
We know that our views and analyses will attract their own criticism and that self-promotion will persist (and indeed grow) as long as it reaps real or perceived rewards for those who practice it. Even if self-promotion is unintentional, our scrutiny of tens of thousands of articles reveals a signal that authors, if not boasting, are increasingly writing in a style intended to influence readers’ assessments of novelty and priority. It is tempting to lay blame for that style on demands for originality from journals, granting agencies, and hiring committees. Doing so fails to recognize that reviewers, editors, and selection committees are also authors (but not necessarily business managers). Fair assessment demands that authors describe their contributions with appropriate humility. Until then, the clarion call for reviewers, editors, granting agencies, recruiters, students, and scientific societies must be caveat lector (let the reader beware). Be doubly wary when encountering examples of self-praise on social media, or statements, press releases, and news reports that hype a study’s potential. Please, dear authors, think twice before including empty self-serving phrases in your writing.
Acknowledgements
We thank Canada’s Natural Sciences and Engineering Research Council for its Undergraduate Student Research Awards to EM and EP, and for its ongoing support of DWM’s research program in evolutionary ecology. S. Palmer helped with data collection. We appreciate comments from two anonymous reviewers that helped us improve this contribution.
References
Azoulay P, Fons-Rosen C, and Graff Zivin JS. 2019. Does science advance one funeral at a time? American Economic Review, 109: 2889–2920.
Baker M. 2016. 1,500 scientists lift the lid on reproducibility. Nature, 533: 452–454.
Biagioli M. 2016. Watch out for cheats in citation game. Nature, 535: 201.
Biagioli M, Kenney M, Martin BR, and Walsh JP. 2019. Academic misconduct, misrepresentation and gaming: a reassessment. Research Policy, 48: 401–413.
Biagioli M, and Lippman A, eds. 2020. Gaming the metrics: misconduct and manipulation in academic research. MIT Press, Cambridge, MA.
Brembs B. 2019. Reliable novelty: new should not trump true. PLoS Biology, 17: e3000117.
Cheplygina V, Herman F, Albers C, Bielczyk N, and Smeets I. 2020. Ten simple rules for getting started on Twitter as a scientist. PLoS Computational Biology, 16: e1007513.
Côté IM, and Darling ES. 2018. Scientists on Twitter: preaching to the choir or singing from the rooftops? Facets, 3: 682–694.
Darling ES, Shiffman D, Côté IM, and Drew JA. 2013. The role of Twitter in the life cycle of a scientific publication. Ideas in Ecology and Evolution, 6: 32–43.
Darwin, C. 1859. On the origin of species by means of natural selection. John Murray, London.
De Rijcke S, Wouters PF, Rushforth AD, Franssen TP, and Hammarfelt B. 2016. Evaluation practices and effects of indicator use—a literature review. Research Evaluation, 25: 161–169.
Editorial. 2021. Not the first, not the best. Nature Human Behaviour, 5: 175.
Edwards MA, and Roy S. 2017. Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34: 51–61.
Fiske P. 2018. Boost your market value. Nature, 555: 275–276.
Fister I Jr, Fister I, and Perc M. 2016. Toward the discovery of citation cartels in citation networks. Frontiers in Physics, 4: 49.
Higginson AD, and Munafò MR. 2016. Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biology, 14: e2000995.
Horbach SPJM, and Halffman W. 2019. The extent and causes of academic text recycling or ‘self-plagiarism’. Research Policy, 48: 492–502.
Hotez PJ. 2018. Crafting your scientist brand. PLoS Biology, 16: e3000024.
Ioannidis JPA, Baas J, Klavans R, and Boyack KW. 2019. A standardized citation metrics author database annotated for scientific field. PLoS Biology, 17: e3000384.
Lerchenmueller MJ, Sorenson O, and Jena AB. 2019. Gender differences in how scientists present the importance of their research: observational study. BMJ, 367: l6573.
Lemaitre B. 2017. Science, narcissism and the quest for visibility. FEBS Journal, 284: 875–882.
McDonnell JJ. 2015. Creating a research brand. Science, 349: 758.
Minitab, LLC. 2019, 2020. Minitab. [online]: Available from minitab.com.
Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, and Goodman SN. 2018. Assessing scientists for hiring, promotion, and tenure. PLoS Biology, 16: e2004089.
Pagel M. 2012. Wired for culture: origins of the human social mind. W.W. Norton & Company, New York.
Planck MK. 1949. Scientific autobiography and other papers. Philosophical Library, New York.
Reis RM. 1999. The need for self-promotion in scientific careers. Chronicle of Higher Education. [online]: Available from chronicle.com/article/The-Need-for-Self-Promotion-in/45602.
Ritchie S. 2020. Science fictions. How fraud, bias, negligence, and hype undermine the search for truth. Metropolitan Books, Henry Holt and Company, New York.
Seeber M, Cattaneo M, Meoli M, and Malighetti P. 2017. Self-citations as strategic response to the use of metrics for career decisions. Research Policy, 48: 478–491.
Smaldino PE, and McElreath R. 2016. The natural selection of bad science. Royal Society Open Science, 3: 160384.
Szomszor M, and Quaderi N. 2020. Research integrity: understanding our shared responsibility for a sustainable scholarly ecosystem. Institute for Scientific Information, Global Research Report, 1–14. ISBN 978-1-9160868-9-0.
Tiokhin L, Yan M, and Morgan TJH. 2021. Competition for priority harms the reliability of science, but reforms can help. Nature Human Behaviour, 5: 857–867.
Trivers R. 2015. Wild life: adventures of an evolutionary biologist. Plympton, Boston, pp. 238.
Vanecek J, and Pecha O. 2020. Fast growth of the number of proceedings papers in atypical fields in the Czech Republic is a likely consequence of the national performance-based research funding system. Research Evaluation, advance article: rvaa005.
Van Noorden R, and Chawla DS. 2019. Policing self-citations. Nature, 572: 578–579.
Vinkers CH, Tijdink JK, and Otte WM. 2015. Use of positive and negative words in scientific PubMed abstracts between 1974 and 2014: retrospective analysis. BMJ, 351: h6467.
Supplementary material
Supplementary Material 1 (DOCX / 17.5 KB)
Published In
FACETS
Volume 6 • Number 1 • January 2021
Pages: 1881 - 1891
Editor: Iain E.P. Taylor
History
Received: 12 July 2021
Accepted: 17 September 2021
Version of record online: 18 November 2021
Copyright
© 2021 Morris et al. This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Data Availability Statement
All relevant data are within the paper and in the Supplementary material.
Plain Language Summary
Influencer science
Author Contributions
DWM conceived and designed the study.
EM and ENP performed the experiments/collected the data.
DWM analyzed and interpreted the data.
DWM contributed resources.
All drafted or revised the manuscript.
Competing Interests
The authors have declared that no competing interests exist.
Cite As
Douglas W. Morris, Erin MacGillivray, and Elyse N. Pither. 2021. Self-promotion and the need to be first in science. FACETS, 6(1): 1881–1891. https://doi.org/10.1139/facets-2021-0100