Knowledge in the dark: scientific challenges and ways forward

Publication: FACETS
26 August 2019

Abstract

A key dimension of our current era is Big Data, the rapid rise in produced data and information; a key frustration is that we are nonetheless living in an age of ignorance, as the real knowledge and understanding of people does not seem to be substantially increasing. This development has critical consequences; for example, it limits the ability to find and apply effective solutions to pressing environmental and socioeconomic challenges. Here, we propose the concept of “knowledge in the dark”—or, for short, dark knowledge—and outline how it can help clarify key reasons for this development: (i) production of biased, erroneous, or fabricated data and information; (ii) inaccessibility and (iii) incomprehensibility of data and information; and (iv) loss of previous knowledge. Even in the academic realm, where financial interests are less pronounced than in the private sector, several factors lead to dark knowledge, that is, they inhibit a more substantial increase in knowledge and understanding. We highlight four of these factors—loss of academic freedom, research biases, lack of reproducibility, and the Scientific tower of Babel—and offer ways to tackle them, for example, establishing an international court of arbitration for research and developing advanced tools for research synthesis.

Introduction

The quote from John Naisbitt, “we are drowning in information but starved for knowledge” (Naisbitt 1982, p. 24), is more applicable today than ever before. Thanks to smartphones and similar devices, we have instant access to enormous amounts of data and information. At the same time, we seem to lack the capacity to transform available information into knowledge that would allow us to make important decisions in our daily lives on topics such as health care or economic investments (Ungar 2008). Evidence suggests that the general knowledge of individuals has not increased in the way that overall information and data have increased—a phenomenon termed the knowledge–ignorance paradox (Putnam 2000; Ungar 2008; Schulz et al. 2010; Schulz 2012; Millgram 2015). Proctor (2016) has called the current era the “age of ignorance”; conspiracy theories and rumors thrive in the World Wide Web’s echo chambers (Butter 2018), and today’s societies are increasingly seen as “post-truth societies” in which truth has partly lost its value and importance to people (Higgins 2016; Viner 2016).
There are different perspectives on and definitions of “knowledge” and related terms such as “reality” (Boghossian 2007; Rowley 2007; Moon and Blackman 2014; Nagel 2014). Assuming that an objective reality exists, our usage of the term knowledge follows the knowledge pyramid where data are on the bottom, information is in the middle, and knowledge and understanding are on top (cf. Ackoff 1989; Rowley 2007). Different versions of this pyramid exist; for example, “understanding” is sometimes left out or subsumed under knowledge. We use knowledge in the broad sense here, including understanding. We avoid a narrow definition of knowledge, as the concept of “knowledge in the dark” outlined below applies to various definitions of knowledge. However, we focus on the knowledge of individual people rather than collective knowledge. Of course, individual and collective knowledge are interrelated, and key points outlined below also apply to collective knowledge, yet a detailed comparison of individual versus collective knowledge is beyond the scope of this article.
As illustrated in the knowledge pyramid, knowledge requires the reflection and interpretation of data and information, i.e., it is evidence based. This is not restricted to scientific evidence, but includes data and information generated in other professions or domains, as well as experience of indigenous people or other local residents (cf. Wynne 1992; Funtowicz and Ravetz 1993; Kleinman and Suryanarayanan 2012; Yeh 2016). When reflecting on and interpreting data and information about a given topic, people can become knowledgeable about this topic. Such knowledge enables them to, for instance, better predict the consequences of important decisions related to this topic—and act accordingly, for example during elections. This is not the case for data and information per se. The latter are only truly useful if people can transform them into knowledge. Here, we focus on desirable knowledge, as humans do not want to know everything (e.g., Gigerenzer and Garcia-Retamero 2017).
The observation that we are living in a time where data and information, and thus potential knowledge, keep accumulating, but where the real knowledge of people does not keep up, is frustrating. Science’s primary goal is to advance knowledge; thus, we are currently falling short of our mission. At the same time, we face a sizable risk that science will lose trust, and thus its role as a counselor for evidence-based decision-making (Pielke 2007), in societies across the globe (Kitcher 2011). Indeed, we observe an increasing gap between evidence and people’s judgment (Funk and Rainie 2015), partly for economic and ideological reasons.
We, the authors of this article, are natural scientists who have discussed this topic in depth with colleagues from various disciplines and put it into a broader context. Based on these discussions and reflections, we have developed the concept of knowledge in the dark that we consider useful for stimulating discussions about the pivotal role science plays in our societies, and it may help improve our ability to make effective decisions that are important for us as individuals and societies. Here, we outline this concept and then apply it to the academic realm. Various aspects of this broader topic have already been extensively dealt with; see for example the existing body of literature on ignorance studies (also known as agnotology; Gross 2007; Proctor and Schiebinger 2008; Kleinman and Suryanarayanan 2012; Gross and McGoey 2015); the relation between knowledge and uncertainty (post-normal science, e.g., Funtowicz and Ravetz 1993; Mode 2 science, e.g., Nowotny et al. 2003); and public understanding of science, public communication of science and technology, and related fields (e.g., Bucchi and Trench 2008; Nisbet and Scheufele 2009; Groffman et al. 2010; McNeil 2013 and references therein). These studies are highly relevant, but in the interest of brevity we do not provide a comprehensive review of them here. Instead, we highlight complementary ideas that have emerged during our discussions over the past years that should be particularly interesting and accessible to natural scientists who form the primary target readership of this article.
In the next section, we provide a conceptual overview of knowledge in the dark with a focus on both laypeople and experts, where we will also clarify in which way this concept builds upon and extends existing terms, concepts, and frameworks. This section will be followed by reasons for knowledge in the dark in academia, while the final section will suggest ways forward to cope with this phenomenon.

Knowledge in the dark

Let us go back to the above-described conundrum that we are living in a time when data and information, and thus potential knowledge, keep accumulating, while the real knowledge of people does not keep up. What we call knowledge in the dark—or, for short, dark knowledge—is the gap between real and potential knowledge (Fig. 1). This gap can be seen as a lost opportunity and seems to have widened through time. It is a major challenge of our current era and is particularly pronounced for inter- and transdisciplinary topics, as knowledge is often trapped in disciplinary silos and professions (Campbell 1969; Ungar 2008; Millgram 2015). At the same time, pivotal environmental, social, and economic challenges urgently need inter- and transdisciplinary solutions.
Fig. 1. Left side: Knowledge in the dark is the gap between real and potential knowledge. The latter represents an idealized scenario assuming that knowledge increases if the amount of data and information increases (cf. Fig. 2). The size of the gap between real and potential knowledge is currently unknown, and so are the shapes and absolute positions of the lines drawn; we have thus added question marks in the graph. There is no y-axis, emphasizing that the graph cannot be read quantitatively. Instead, the relative positions of the lines to each other are important. The curve for potential knowledge is below the one for data and information, as it is not possible to translate all data and information into knowledge (e.g., Ackoff 1989; Rowley 2007). Right side: selected key reasons for knowledge in the dark.
Our use of the term dark knowledge was inspired by “dark matter” in physics and “dark diversity” in biodiversity research. The former is probably well-known to most readers, and the latter describes the gap between potential and actual biodiversity in a given region (Pärtel et al. 2011).
The terms knowledge in the dark and dark knowledge have not yet been applied in the emerging social science field of agnotology (Proctor and Schiebinger 2008); only ignorance (the lack of knowledge; cf. Table 1) has been widely used, albeit with different meanings (see Gross 2007 for standard terms used in this field). It seems useful to discriminate between the different dimensions of ignorance. Dark knowledge includes those dimensions of ignorance that can in principle be reduced. It does not include ignorance that cannot be reduced: we humans cannot know everything. In Pinker’s (1997, p. 561) words: “We are organisms, not angels, and our minds are organs, not pipelines to truth. Our minds evolved by natural selection to solve problems that were life-and-death matters to our ancestors, not … to answer any question we are capable of asking.” Similarly, we humans do not want to know everything (e.g., Gigerenzer and Garcia-Retamero 2017); hence, the concept of dark knowledge focuses on desirable knowledge.
Table 1. Key terms and previous concepts in the context of knowledge in the dark, and how we define them here.

Dark knowledge: Short for knowledge in the dark (this article; Fig. 1).

Evidence: Data and information, either scientific or generated in other professions or domains, as well as experience of indigenous people or other local residents.

Ignorance: The lack of knowledge; includes knowledge in the dark. Please note that this colloquial meaning differs from how social scientists sometimes use ignorance: it can also mean knowledge about the limits of knowledge.

Ignorance studies: Also known as agnotology; the social science field focusing on ignorance.

Knowledge: Knowledge (which includes understanding) requires the reflection and interpretation of data and information, i.e., it is based on evidence (following the knowledge pyramid). For instance, knowledge allows one to better predict the consequences of important decisions and to act accordingly. We here focus on desirable knowledge, as humans do not want to know everything.

Knowledge–ignorance paradox: The general knowledge of individuals has not increased in the way that overall information and data have increased.

Knowledge in the dark: The gap between real and potential knowledge. It is limited to those dimensions of ignorance that humans (a) can in principle and (b) want to reduce (humans cannot and do not want to know everything).


Thus, dark knowledge is a particular part of ignorance for which a specific term (and definition) has been lacking thus far. It is of high practical relevance, as it focuses on those dimensions of ignorance that humans both can (in principle) and want to reduce (Table 1). In this way, the concept might be of interest to researchers in the field of agnotology. It should also be useful for the fields of public understanding of science, public communication of science and technology, and related areas. Relevant works here have shown that engaging with the public, which includes an open dialogue between scientists and other stakeholders, is much more effective than one-way communication from scientists to the public (e.g., engagement vs. deficit model; Nisbet and Scheufele 2009; Groffman et al. 2010; McNeil 2013; Smith et al. 2013). Such insights are essential for tackling dark knowledge. Important measures in addition to engaging with the public are outlined in the section on ways forward. Some mechanisms leading to dark knowledge are related to uncertainty, which is key for post-normal science and Mode 2 science (Funtowicz and Ravetz 1993; Nowotny et al. 2003).
The concept of dark knowledge also benefits greatly from other points put forward by social scientists, for example the importance of considering research biases (see below for details) or the insight that science has no monopoly on evidence, as data and information stemming from outside of science can be crucial as well (Wynne 1992; Funtowicz and Ravetz 1993; Kleinman and Suryanarayanan 2012; Yeh 2016); further examples are provided below.
As scientists, we need to be aware of roadblocks to our endeavor to advance knowledge and focus on those we really can and want to remove. The dark knowledge concept may be helpful in this regard, particularly when we consider key mechanisms underlying dark knowledge—these are the roadblocks we should focus on.
We highlight four of these mechanisms here (Fig. 1, right). They are aligned with the consecutive steps making up the process of knowledge production: how data and information are (i) produced or not, (ii) made available or not, (iii) comprehensible or not, and (iv) remembered or forgotten. The mechanisms differ in their effects on different focal groups, from (a) researchers and other experts in the institution where specific data and information have been generated, to researchers and other experts outside of this institution, but in the (b) same or a (c) similar discipline or profession, and to (d) nonexperts (Fig. 2). In explaining the mechanisms, we draw on findings across various disciplines, e.g., the social sciences (including agnotology) and economics.
Fig. 2. How different key reasons underlying knowledge in the dark affect (i) the amount of data and information and (ii) the real knowledge of different focal groups, in comparison to an idealized reference scenario assuming that knowledge increases if the amount of data and information increases (cf. potential knowledge in Fig. 1). The intensity of the effect on the knowledge of the different focal groups is indicated in grey, where dark grey represents a strong effect (i.e., a large gap between potential and real knowledge) and light grey represents a weak effect (i.e., a narrow gap between potential and real knowledge). Notes: (1) Researchers and other experts in the institution where data and information were generated might be aware of potential biases and errors in the data and information; hence, they can ignore them in such cases, and their knowledge is not reduced by such data and information. Others will not be aware of biases and errors and will thus be misled, which reduces their knowledge on the topic. (2) Data and information are typically accessible to researchers and other experts in the institution where they were generated; for researchers and other experts in the same or a similar discipline, profession, or knowledge domain, the data and information might be accessible (e.g., if their colleagues from the institution are willing to share them), but nonexperts typically have no access (an exception would be if the researchers producing the data and information follow an open science model). (3) Incomprehensibility is particularly severe for nonexperts, but might already affect researchers and experts from a discipline or profession similar to the one within which the data and information were generated. (4) The loss of knowledge is illustrated for the case of a discipline that disappeared, where the knowledge of researchers and other experts in the discipline is gone; the knowledge of researchers in similar disciplines and of nonexperts is reduced as well, as they cannot benefit from the experts’ knowledge anymore. The amount of data and information itself is not reduced in this case, although of course they are not fully comprehensible anymore.

Biased, erroneous, or fabricated data and information

First, dark knowledge can be caused by biased, erroneous, or fabricated data and information. For instance, the type of data and information produced can be influenced by financial or sociopolitical interests (Kitcher 2011). When “high-stakes” metrics are applied, i.e., metrics that assess the performance of people and at the same time strongly influence their future careers, there are incentives to “cream” or fabricate the data used for calculating these metrics; creaming is a strategy to maximize a metric by “excluding cases where success is more difficult to achieve” (Muller 2018, p. 24). For example, schools in Florida and Texas have been shown to reclassify weak students as disabled, thus excluding them from the calculation of average student achievement levels, a high-stakes metric for teachers and school principals (Muller 2018, p. 93).
The production of biased, erroneous, or fabricated data and information can be combined with systematic disinformation leading to doubt and uncertainty. This was, for example, done by the tobacco industry, which successfully distorted the public understanding of tobacco health effects (Oreskes and Conway 2010). Similar strategies have been applied in the context of climate change (Oreskes and Conway 2010), by the sugar industry (Kearns et al. 2015), and by pharmaceutical companies that hide information about their products from the public (Kreiß 2015; Crouch 2016). False information can now be actively spread with so-called bots, i.e., software applications running automated tasks (Howard and Kollanyi 2016; Kollanyi et al. 2016).
Producing biased, erroneous, or fabricated data and information leads, of course, to an increase in the amount of data and information (Fig. 2). Under ideal circumstances, such an increase would augment the amount of knowledge (see the idealized line “potential knowledge” in Fig. 1 and the idealized scenario in Fig. 2). Biased, erroneous, or fabricated data and information, however, reduce rather than increase knowledge (Fig. 2). Only researchers or other experts from the institution that generated the data and information might be aware of critical errors; other people are usually not aware of them, and thus their understanding of the topic will be severely hampered (Fig. 2).

Inaccessible data and information

The second reason for dark knowledge is inaccessibility of data and information. For example, findings of secret services, the military, and industry are frequently inaccessible to the public and thus do not increase public knowledge (Resnik 2006; Proctor and Schiebinger 2008; Bozeman and Youtie 2017). Looking at Organisation for Economic Co-operation and Development (OECD) countries (for which more comprehensive and comparable data are available than for other countries), expenditures on research and development by industry and the military combined are about three times higher than governmental expenditures for civil research (OECD 2017). Industry investments are particularly high and have been increasing through time, whereas governmental expenditures are, as a percentage of gross domestic product (GDP), lower today than they were in the 1980s (Fig. 3). This trend can be called privatization of knowledge. In 2015, Volkswagen had the highest research and development budget of all companies worldwide, higher than the United Kingdom’s governmental expenditures for civil research (Fig. S1). Samsung also exceeded the United Kingdom’s budget, and Intel and Microsoft exceeded Italy’s budget.
Fig. 3. Temporal development of relative (as a percentage of GDP) industrial and governmental expenditures on R&D in several countries and groups of countries (data from OECD 2017). GDP, gross domestic product; R&D, research and development; JAP, Japan; GER, Germany; USA, United States of America; OECD, Organisation for Economic Co-operation and Development; FRA, France; EU15, the 15 member countries in the European Union prior to 1 May 2004; GBR, Great Britain.
Of course, not all research results from industry, the military, or secret services remain hidden from the public. This is, for example, illustrated by the American Department of Defense Congressionally Directed Medical Research Programs (http://cdmrp.army.mil), which originated in 1992 with a focus on breast cancer research and now includes other medical research areas that are not primarily of military interest but benefit the general public (Young-McCaughan et al. 2002). Nonetheless, a large fraction of the research results from industry, the military, or secret services remains hidden. Companies intend to become economic leaders in their specific domain, and the military supports geopolitical power and protects national interests (see also Resnik 2006). Thus, the results that are made public are often biased or selected, for instance to boost sales (e.g., for pharmaceutical products), to avoid legal restrictions (e.g., for tobacco or sugar), or to shape geopolitical decisions (Hartnett and Stengrim 2004; Oreskes and Conway 2010; Kearns et al. 2015; Kreiß 2015; Crouch 2016).

Incomprehensible data and information

The third reason for dark knowledge is that much information is incomprehensible. Even if information is accessible in principle, it can frequently be understood only by researchers and experts from the same discipline or profession, whereas most people find it incomprehensible, for instance because they do not understand the logic underlying the data or information, or the technical language in which these are presented (Fig. 2; Millgram 2015; Plavén-Sigray et al. 2017).

Loss of knowledge

Fourth and finally, previous knowledge can be lost. This is, for example, the case when professions or scientific disciplines shrink (e.g., if university positions for a discipline are cut) or completely disappear. Although the literature and other information produced by such disciplines still exist, there is (almost) no one left to make this information fully comprehensible and usable. This mechanism underlying dark knowledge is thus similar to the third; however, here there are (almost) no experts anymore who could tap into the literature and information and teach nonexperts. Consequently, some of the knowledge that had been produced by these dying disciplines and professions is forever lost. If languages disappear, any related information is similarly lost; and data and information stored in disappearing technologies will also be lost if not transferred to modern technologies. For example, information stored on floppy disks is nowadays increasingly hard to access.
While we outlined general reasons for dark knowledge in this section, we will specify them for academia in the next section and then suggest ways to tackle them. The insights we offer may be transferable to other professions and knowledge domains. Since dark knowledge is a broader societal phenomenon and challenge, we encourage others to join us in advancing the concept of dark knowledge in the future and to apply it in various disciplines and professions.

Knowledge in the dark in academia

We highlight four reasons underlying dark knowledge in academia: loss of academic freedom, research biases, lack of reproducibility, and the Scientific tower of Babel (Fig. 4).
Fig. 4. Key underlying reasons for knowledge in the dark in academia and possible ways forward. The general dimensions of knowledge in the dark (left section, from Fig. 1) are related to key challenges in academia (middle section, linkages are indicated by connecting lines). Possible ways forward to tackle each of these challenges are listed in the right section. Please note that the challenge “Loss of scientific disciplines” is not discussed in detail in the text.

Loss of academic freedom

Academic freedom is pivotal for the functioning of democratic societies, because independent and evidence-based knowledge is necessary if we are to cope with the grand challenges our societies are facing. In reality, however, individual researchers and institutions are not always free in what they investigate and teach, even in democratic societies. Dramatic examples, collected by the Scholars at Risk Network at http://monitoring.academicfreedom.info, include researchers who have lost their positions or have been prosecuted or imprisoned for political or other reasons.
A more subtle reason for a lack of academic freedom is the overuse of quantitative performance indicators such as the h-index, number of publications (in high-impact journals), or amount of grant money obtained. Such metrics have become increasingly popular in evaluating researchers and research institutions (e.g., Weingart 2005; Lawrence 2007; Fischer et al. 2012; Kaushal and Jeschke 2013; Arlinghaus 2014; Hicks et al. 2015; Jeschke et al. 2016; see Muller 2018 for a broader treatment of the topic beyond academia). As a result, researchers focus on topics for which funding is available and that are likely to be published in high-impact journals. This is particularly true if base funding is lacking or if researchers do not have a permanent position. Even in a wealthy country such as Germany, the relative proportion of permanent staff in science and arts at universities was only 17% in 2014 (Buschle and Hähnel 2016).
Moreover, third-party funding is partly steered through politically or economically motivated funding calls, frequently influenced by lobbyists (Kreiß 2015). In addition, private enterprises may exert influence on public research institutions through sponsoring professorships and infrastructure (Kreiß 2015; Crouch 2016), thus potentially further confining academic freedom, independence, and diversity.

Academic research biases

The Matthew effect (Merton 1968) describes the phenomenon that established scientists receive disproportionate credit, whereas lesser-known scientists get little credit for their contributions. This “the rich get richer” phenomenon has been corroborated by analyses of scientific collaboration and citation networks (Perc 2014). It favors mainstream research while other topics are being ignored, particularly in a competitive environment with few permanent positions (“undone science”, e.g., Kleinman and Suryanarayanan 2012). Countless examples of topical research biases can be found across disciplines. For instance, much of the research on global change and biodiversity loss has focused on climate change, leaving other critical topics, such as the effects of synthetic chemicals or the interaction of biodiversity stressors, poorly studied (Bernhardt et al. 2017; Mazor et al. 2018). Kleinman and Suryanarayanan (2012) used the example of colony collapse disorder in bees to illustrate that the way research topics are addressed is often biased towards reductionist approaches that ignore the real world’s complexity. Another example comes from economics, which is still largely focused on the neoclassical model, whereas approaches such as ecological economics have remained underexplored (Van den Berg 2014).
Similarly, strong geographic biases can be found across research disciplines, as research is typically concentrated in affluent countries, particularly in North America and Europe. This has direct and severe consequences for human health elsewhere, as research on diseases restricted to less affluent regions is critically neglected (Kitcher 2011). Biodiversity research also has strong geographic biases towards North America and Europe, even though biodiversity hotspots are primarily located in the Global South (Bellard and Jeschke 2016; Wilson et al. 2016; Tydecks et al. 2018).

Lack of reproducibility, author biases, financial interests

The first report of the Open Science Collaboration, which has performed extensive replications of earlier studies in psychology, reported an average reproducibility of only 39% for 97 experiments (Open Science Collaboration 2015; see also Prinz et al. 2011; Ioannidis 2012). A similar phenomenon—which has primarily been reported in psychology, but also in other disciplines such as medicine and biology—is that the strength of evidence (e.g., on the efficacy of a given drug or the empirical support for a scientific hypothesis) frequently declines over time (“decline effect”; Lehrer 2010; Schooler 2011; Jeschke et al. 2012).
Low reproducibility and decline effects can have several underlying reasons. Brian Nosek provides an example: “We interpret observations to fit a particular idea; … we have already made the decision about what to do or to think, and our ‘explanation’ of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway” (quoted from Ball 2015). Such motivated reasoning is interlinked with temporary fashions in science. For instance, scientists love new hypotheses, as they promise to move a given research field forward. Scientists thus frequently want to find supporting evidence for a new hypothesis, particularly if they proposed it themselves. Furthermore, studies supporting a new hypothesis are easier to publish than those supporting established hypotheses; for the latter, it has become more interesting to publish contradictory evidence. Such publication biases can thus lead to a decline in empirical support for a given hypothesis over time (Jeschke et al. 2012).
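To make this mechanism concrete, the following minimal simulation (our sketch, not an analysis from the studies cited above; all parameter values are arbitrary assumptions) shows how a publication filter that admits only significant, positive results inflates early published effect sizes, so that later, unfiltered studies register as an apparent decline.

```python
# A minimal simulation of how publication bias alone can produce a "decline
# effect": if early studies are published only when significant and positive,
# the early published record overstates the true effect, and later, unfiltered
# studies then look like declining support. All parameter values are
# illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect, n, n_studies = 0.2, 30, 2000

published_early, published_late = [], []
for _ in range(n_studies):
    sample = rng.normal(true_effect, 1.0, n)   # one study's raw data
    p = stats.ttest_1samp(sample, 0.0).pvalue
    observed = sample.mean()
    if p < 0.05 and observed > 0:              # early era: only significant, positive results appear
        published_early.append(observed)
    published_late.append(observed)            # later era: everything is publishable

print(f"true effect:               {true_effect:.2f}")
print(f"mean published, early era: {np.mean(published_early):.2f}")  # clearly inflated
print(f"mean published, late era:  {np.mean(published_late):.2f}")   # close to the true effect
```

With these settings, the early published record roughly doubles the true effect, so the later, unfiltered literature inevitably reads as a decline even though nothing about the underlying phenomenon has changed.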
Financial interests may also reduce reproducibility, cause decline effects, and prevent access to data and information. For example, there is evidence that the pharmaceutical, tobacco, and sugar industries have strategically manipulated data and information about their products, particularly when these are brand new and need to be sold on the market to recoup development costs (Lexchin et al. 2003; Oreskes and Conway 2010; Lexchin 2012; Kearns et al. 2015; Kreiß 2015; Crouch 2016).

The Scientific tower of Babel

Members of scientific disciplines use particular technical terminology—“jargon”—which is hardly comprehensible to nonspecialists. Some technical terms are clearly identifiable as jargon, especially if they do not exist outside of the discipline. Other technical terms cannot be readily identified, as the same terms exist in everyday language, yet with a different meaning, leading to misunderstandings. For example, we use the term ignorance here as in everyday language, meaning “the lack of knowledge” (e.g., http://wordnet.princeton.edu); however, when social scientists use the technical term ignorance, they frequently mean “knowledge about the limits of knowledge” (Gross 2007, p. 751).
Technical terms are often helpful in accurately and succinctly writing scientific papers. This is particularly true if the target readership is within the boundaries of the same discipline. Jargon can thus reduce dark knowledge within disciplines; however, it hampers inter- and transdisciplinary work. Analyzing 709 577 abstracts published between 1881 and 2015 from 123 scientific journals, Plavén-Sigray et al. (2017) showed that the use of jargon in scientific texts has increased with time, and concurrently the readability of scientific texts has decreased. A total of 22% of scientific abstracts published in 2015 cannot even be considered readable by graduates from English-language colleges.
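For readers curious about what such readability measurements involve, the sketch below shows the core computation of one widely used formula, the Flesch Reading Ease score; the crude vowel-run syllable counter and the example sentence are our simplifications for illustration, not the exact pipeline of Plavén-Sigray et al. (2017).

```python
# A rough sketch of the kind of readability scoring used in such analyses:
# the Flesch Reading Ease formula (higher scores = easier text). Published
# analyses rely on careful tokenization and several complementary metrics.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels; count at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

jargon_heavy = ("We quantify heterogeneity in methodological "
                "operationalizations of interdisciplinary knowledge-synthesis "
                "frameworks.")
print(round(flesch_reading_ease(jargon_heavy), 1))  # strongly negative: very hard to read
```

Long sentences and many-syllable words drive the score down, which is exactly what an accumulation of jargon does to scientific abstracts.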
The rise of technical terminology is one key reason for the knowledge–ignorance paradox outlined above. Today, people have a high level of specialized knowledge but a relatively low level of general understanding. Knowledge becomes increasingly trapped in disciplines, and people outside a given discipline may become “logical aliens”, i.e., they do not understand the logic and standards of a specific discipline: “if you are an academic employed by a university, and you want to meet a logical alien, you don’t need to walk any further than the other end of the hall—or at most, to an adjacent building on your very own campus” (Millgram 2015, p. 33).

Ways forward in academia

Dark knowledge is a challenge for democratic societies, as these need citizens who can make informed decisions. If people are ill-informed or no longer care about the truth, democracy is at risk and science will basically become irrelevant (Kitcher 2011). To avoid such a pessimistic scenario, what are possible ways forward? We outline five approaches below (summarized in Fig. 4).
In these approaches, we do not explicitly mention public engagement of scientists, although it is implicitly included in some of our suggested solutions. Engaging with the public, including an open and active dialogue with stakeholders, is a key task of scientists, and we refer interested readers to publications where these issues have been treated in detail (e.g., Bucchi and Trench 2008; Nisbet and Scheufele 2009; Groffman et al. 2010; McNeil 2013; Smith et al. 2013).

Open science

Key components of open science are open access to scientific publications, open data, open source, and open methodology (Kraker et al. 2011). One of its initiatives aims at FAIR—findable, accessible, interoperable, and reusable—data (Wilkinson et al. 2016). Thus, open science directly tackles one of the key reasons underlying dark knowledge: the inaccessibility of data and information. Regarding the more specific challenges in academia outlined in the previous section, open science has great potential to improve research reproducibility (e.g., through open methodology) and to reduce biases in which data and information can be found, accessed, and reused for research synthesis (e.g., through the FAIR data principles). Open science is clearly an important step forward and helps build trust in research.
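To make the FAIR principles more tangible for readers who have not yet worked with them, here is an illustrative sketch of a machine-actionable metadata record; the field names and the DOI are hypothetical, loosely modeled on common repository schemas (e.g., DataCite) rather than taken from any official standard.

```python
# An illustrative sketch of machine-actionable metadata in the spirit of FAIR.
# All field names and values are hypothetical placeholders.
import json

record = {
    "identifier": "doi:10.5281/zenodo.0000000",              # Findable: persistent, unique ID
    "title": "Long-term lake temperature measurements",      # Findable: descriptive metadata
    "creators": ["Example, Alice", "Example, Bob"],
    "access_url": "https://doi.org/10.5281/zenodo.0000000",  # Accessible: standard protocol
    "format": "text/csv",                                    # Interoperable: open file format
    "keywords": ["limnology", "temperature", "monitoring"],  # Findable: indexed keywords
    "license": "CC-BY-4.0",                                  # Reusable: explicit reuse terms
    "provenance": "Sensor network, station X; calibration protocol v2",  # Reusable
}
print(json.dumps(record, indent=2))
```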
However, there are also important challenges. First, the public availability of data such as health records, behavioral data, or genomic sequencing information can expose citizens to misuse by private companies and (future) governments alike. There has been much research on the re-identification of anonymized data, and many examples of past misuse of such data sets exist (Ohm 2009; O’Doherty et al. 2016). In ecology and conservation biology, information about the location of individuals belonging to endangered or newly described rare species can be used by poachers to find them (Lindenmayer and Scheele 2017). Another potential negative effect is that too many nature lovers will try to find particular animals or plants, with possible negative consequences for the whole ecosystem: large numbers of visitors may destroy the habitat and harm the species inhabiting it (Lindenmayer and Scheele 2017).
Second, a thorough discussion is needed of how to deal with private companies using the data sets of public research institutions. Open public databases are paid for by taxpayers and may be an important source of wealth for private companies, which themselves do not typically share their data with the public; when they do, these data are often biased (see above). In other words, open public databases essentially subsidize certain private companies (cf. Mirowski 2018). It is clear that an open science approach alone will not solve the challenges underlying dark knowledge; thus, additional approaches are needed (see below).

Diverse evaluation systems

There is an increasing need to revise the performance metrics applied to researchers and institutions. As briefly outlined above, the application of a few quantitative metrics focusing on money, publications, and citations constrains academic freedom and favors mainstream rather than outside-of-the-box research, thus promoting research biases (e.g., the Matthew effect) and incentives for authors to predominantly publish what is currently fashionable in science, whereas other research results might remain unpublished (i.e., author publication biases). Furthermore, it may impede inter- or transdisciplinary research, thus contributing to the challenge of the Scientific tower of Babel (cf. Campbell 1969), and even threaten entire disciplines in which financial interests, overall numbers of publications, and citations are low.
There is a clear need to diversify evaluation strategies. Researchers should not always be assessed using the same set of metrics, but different metrics should be applied depending on which type of researcher and which skill is needed at an institution (Weingart 2005; Arlinghaus 2014; Hicks et al. 2015; Jeschke et al. 2016). Otherwise, players (i.e., researchers and heads of institutions) focus on “gaming” metrics rather than on their research. Indeed, maximizing metrics has become an end in itself for many researchers, which is not surprising when these metrics are continuously applied for their evaluation (Lawrence 2007; Hicks et al. 2015). For example, many researchers today primarily think about how they can acquire grant money and how they can get into a high-impact journal. If different metrics are applied by different evaluation committees, researchers may be less worried about maximizing certain metrics, as they do not know which metrics will be used in their case. They can then instead focus on actually creating knowledge.
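As a small illustration of why relying on a single number is problematic, the following sketch (ours; the two citation records are invented) computes the h-index, one of the metrics discussed above: two very different research profiles collapse onto the same score, which is exactly the kind of information loss that diversified evaluation is meant to counter.

```python
# A toy illustration of how a single scalar metric compresses very different
# research profiles into the same number: both invented citation records below
# yield an h-index of 4.
def h_index(citations: list[int]) -> int:
    # Largest h such that at least h papers have >= h citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

steady = [5, 5, 4, 4, 3]           # consistent mid-level impact
skewed = [90, 60, 7, 4, 1, 0, 0]   # impact concentrated in two landmark papers
print(h_index(steady), h_index(skewed))  # both print 4
```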

An international court of arbitration for research

Another promising way forward would be to use existing codes of ethics and responsible conduct in science and research (e.g., www.esa.org/esa/about/governance/esa-code-of-ethics; www.icmje.org/recommendations) and turn parts of them into binding rules (cf. Kaushal and Jeschke 2013; Alberts et al. 2015). Any violations of these rules could be dealt with by an international Court of Arbitration for Research (CARe). A similar system exists for sports, where disputes (e.g., doping) can be settled at the international Court of Arbitration for Sport (CAS), which has three courts (in Lausanne, New York, and Sydney). Perhaps it would be worth trying to have at least one for research as well, either in the form of a court or a similar type of entity, such as an international agency of research integrity.
Such an international entity could serve three functions. First, it could assist in setting standards and stimulate a cross-disciplinary discussion of what constitutes scientific misconduct and what does not (cf. Neuroskeptic 2012). Second, for those few countries that have a similar national-level entity (e.g., Austria or Sweden, www.oeawi.at, www.epn.se/en/start/expert-group-for-misconduct-in-research-at-the-central-ethical-review-boardstar), an international entity could handle revisions of cases that are not resolved nationally. Third, it could ensure independent investigation and judgement of possible cases of misconduct. Such independence is not guaranteed if cases are investigated by the research institutions where they occurred or by the journals where a study was published. Also, misconduct by scientists often spans institutions, countries, and journals. After a group of researchers investigated scientific misconduct on the part of the Japanese bone researcher Yoshihiro Sato over a period of several years, focusing on 33 of his more than 200 papers, they concluded that “investigations of this scale should not be handled by journals or institutions” (Kupferschmidt 2018, p. 639).
Of the challenges outlined above, such a court would mainly tackle (i) the loss of academic freedom and (ii) the lack of reproducibility and related financial interests. Standards and rules can be discussed and implemented to clarify what constitutes misconduct that curtails academic freedom, and potential cases can be handled at the court. Similarly, the court could handle cases of potential misconduct that changed the outcome of studies, for example through data manipulation, thus making them irreproducible. As outlined above, such misconduct is sometimes driven by financial interests. Of course, the effectiveness of such a court in preventing future cases of misconduct will depend on many factors—a key aspect will be its real power to penalize misconduct.

Advances in research synthesis

The primary goal of research synthesis is to gather, process, and present complex data and information so that they become more accessible. Indeed, we argue that advances in research synthesis are critical for tackling dark knowledge. For example, systematic reviews and meta-analyses such as those performed by Cochrane (www.cochrane.org) have proven important in synthesizing data and information. However, we need to take further steps (Nakagawa et al. 2019). A promising path forward is an atlas or map of knowledge that allows people to see where certain research is situated and which lines of research and concepts are (dis-)similar to each other (Bollen et al. 2009; Börner 2010, 2015; Kitcher 2011; Jeschke 2014). Such a map of knowledge would allow nonspecialists to better understand a given discipline and more quickly acquire its knowledge, thus tackling the challenge of the Scientific tower of Babel. Advanced synthesis tools also reveal how data and information delivered by various research fields are important for tackling ecological, social, and economic challenges; they clearly show the need to keep alive research fields whose existence might be threatened.
Furthermore, knowledge maps and other synthesis tools can only be successfully developed if scientists of several disciplines and artists work together. For instance, information technologists and statisticians should not only work with experts on the focal research questions, but also with artists or designers who will make sure that the final product (e.g., an online portal) is aesthetically sound and user-friendly. Fortunately, such joint work on advanced research synthesis is increasing, for instance work on visual analytics (Keim et al. 2010), sonification, which turns data into sound (Hermann et al. 2011), or the above-mentioned advances in creating knowledge maps (https://hi-knowledge.org). Advanced tools for research synthesis can also help uncover and correct for topical, geographic, or author biases, e.g., by considering potential interests of the funders of a study.
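As a toy illustration of the kind of data structure underlying such knowledge maps, the following sketch (ours, using the networkx library; the three paper records are invented) builds a simple keyword co-occurrence network, the raw material from which maps like those cited above are typically laid out.

```python
# A minimal sketch of how a map of knowledge can be bootstrapped from
# bibliographic metadata: a co-occurrence network of keywords, with edge
# weights counting how often two concepts are tagged on the same paper.
# Real maps are built from large bibliographic databases and dedicated
# layout algorithms.
from itertools import combinations
import networkx as nx

papers = [
    {"invasion ecology", "propagule pressure", "biodiversity"},
    {"invasion ecology", "enemy release", "biodiversity"},
    {"meta-analysis", "biodiversity", "climate change"},
]

G = nx.Graph()
for keywords in papers:
    for a, b in combinations(sorted(keywords), 2):
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Densely connected concepts indicate established lines of research; sparse
# regions of the map hint at under-explored territory.
for a, b, d in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(f"{a} -- {b}: {d['weight']}")
```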

Training the next generation of researchers

Targeted training can also help reduce dark knowledge. Teaching and knowledge centers for data experts and data managers are important (e.g., https://cds.nyu.edu, www.monash.edu/it/our-research/research-centres-and-labs/centre-for-data-science), and we need interdisciplinary training that allows members of different disciplines to talk to and understand each other (Millgram 2015); see, for example, Campbell’s (1969) fish-scale model.
Additional training is required in critically evaluating information and reducing questionable research practices. Specifically, courses could include analyzing different information sources and teaching methods for distinguishing science from pseudoscience (Boudry and Braeckman 2012). They should address questions such as: What constitutes or should constitute our evidence base? What is the role of evidence-based knowledge in society and political decision-making? For example, the course “Calling Bulls**t: Data Reasoning in a Digital World” by Bergstrom and West at the University of Washington, which started in 2017, is a valuable way forward. Its aim is to teach students “how to think critically about the data and models that constitute evidence in the social and natural sciences” (http://callingbulls**t.org).
Training of future researchers should also build awareness that scientists are not immune to biases that influence their work. A profound understanding of what differentiates responsible research from questionable research practices is necessary (Neuroskeptic 2012; Sijtsma 2016). Questionable research practices do not necessarily imply intentional fraud but can include “p-hacking”, for example repeating an experiment until the desired statistical significance is reached or ignoring outliers in statistical analyses (Neuroskeptic 2012; Head et al. 2015). Such practices of “data cooking” are unfortunately widespread (Fanelli 2009). Importantly, such targeted training needs to benefit future researchers across the globe.
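As a concrete illustration of why such training matters, the following minimal simulation (ours, with illustrative parameters) shows the effect of one p-hacking variant named above, rerunning an experiment until p < 0.05: even when there is no true effect at all, the false-positive rate inflates far beyond the nominal 5%.

```python
# A minimal simulation, with illustrative parameters, of rerunning an
# experiment until p < 0.05. With up to 5 attempts and no true effect,
# the false-positive rate approaches 1 - 0.95^5, i.e., about 23%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, max_attempts, trials = 20, 5, 4000

false_positives = 0
for _ in range(trials):
    for _ in range(max_attempts):         # repeat the "experiment" up to 5 times
        sample = rng.normal(0.0, 1.0, n)  # data with no true effect at all
        if stats.ttest_1samp(sample, 0.0).pvalue < 0.05:
            false_positives += 1          # report the "significant" run, stop rerunning
            break

print(f"observed false-positive rate: {false_positives / trials:.2f}")  # ~0.23, not 0.05
```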

Conclusions

To tackle the challenge of dark knowledge, we need to develop and implement an array of tools. Some of these tools were outlined above with a focus on academia. Additional tools that, for example, increase public engagement and participation in science are clearly needed within and outside of academia to avoid an age of ignorance.

Acknowledgements

We appreciate stimulating discussions with other members of the Dark Knowledge Group in Berlin, particularly input and comments by Elisabeth Marquard, Gabriele Bammer, Martin Enders, Hans-Peter Grossart, Lara Hofner, Lydia Koglin, Simone Langhans, Johannes Müller, Florian Ruland, Ulrike Scharfenberger, and Max Wolf. We additionally appreciate contributions at the session “Open Science, Dark Knowledge: Science in an Age of Ignorance” of the Alpbach Technology Symposium, Austria, in August 2017 (organized by KT and JMJ). We also very much thank Karin Bugow, Fernando Galindo-Rueda, Nicole Klenk, Christoph Kueffer, Paolo Mazzetti, Elijah Millgram, and anonymous reviewers for helpful input. Financial support was received from the Cross-Cutting Research Domain Aquatic Biodiversity of the Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB), the Deutsche Forschungsgemeinschaft (DFG; JE 288/9-1, JE 288/9-2), and the Austrian Federal Ministry of Education, Science and Research (BMBWF).

References

Ackoff RL. 1989. From data to wisdom. Journal of Applied Systems Analysis, 16: 3–9.
Alberts B, Cicerone RJ, Fienberg SE, Kamb A, McNutt M, Nerem RM, et al. 2015. Self-correction in science at work: improve incentives to support research integrity. Science, 348: 1420–1422.
Arlinghaus R. 2014. Are current research evaluation metrics causing a tragedy of the scientific commons and the extinction of university-based fisheries programs? Fisheries, 39: 212–215.
Ball P. 2015. The trouble with scientists: how one psychologist is tackling human biases in science. Nautilus, Issue No. 24 [online]: Available from http://nautil.us/issue/24/error/the-trouble-with-scientists.
Bellard C, and Jeschke JM. 2016. A spatial mismatch between invader impacts and research publications. Conservation Biology, 30: 230–232.
Bernhardt ES, Rosi EJ, and Gessner MO. 2017. Synthetic chemicals as agents of global change. Frontiers in Ecology and the Environment, 15: 84–90.
Boghossian P. 2007. Fear of knowledge: against relativism and constructivism. Clarendon Press, Oxford, UK.
Bollen J, Van de Sompel H, Hagberg A, Bettencourt L, Chute R, Rodriguez MA, et al. 2009. Clickstream data yields high-resolution maps of science. PLoS ONE, 4: e4803.
Börner K. 2010. Atlas of science: visualizing what we know. MIT Press, Cambridge, Massachusetts.
Börner K. 2015. Atlas of knowledge: anyone can map. MIT Press, Cambridge, Massachusetts.
Boudry M, and Braeckman J. 2012. How convenient! The epistemic rationale of self-validating belief systems. Philosophical Psychology, 25: 341–364.
Bozeman B, and Youtie J. 2017. The strength in numbers: the new science of team science. Princeton University Press, Princeton, New Jersey.
Bucchi M, and Trench B (Editors). 2008. Handbook of public communication of science and technology. Routledge, Abingdon, UK.
Buschle N, and Hähnel S. 2016. Hochschulen auf einen Blick. Statistisches Bundesamt, Wiesbaden, Germany.
Butter M. 2018. “Nichts ist, wie es scheint”: Über Verschwörungstheorien. Suhrkamp, Berlin, Germany.
Campbell DT. 1969. Ethnocentrism of disciplines and the fish-scale model of omniscience. In Interdisciplinary relationships in the social sciences. Edited by M Sherif and CW Sherif. Aldine, Chicago, Illinois. pp. 328–348.
Crouch C. 2016. The knowledge corrupters: hidden consequences of the financial takeover of public life. Polity Press, Cambridge, UK.
Fanelli D. 2009. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4: e5738.
Fischer J, Ritchie EG, and Hanspach J. 2012. Academia’s obsession with quantity. Trends in Ecology & Evolution, 27: 473–474.
Funk C, and Rainie L. 2015. Public and scientists’ views on science and society. Pew Research Center, Washington, D.C. [online]: Available from www.pewinternet.org/2015/01/29/public-and-scientists-views-on-science-and-society.
Funtowicz SO, and Ravetz JR. 1993. Science for the post-normal age. Futures, 25: 739–755.
Gigerenzer G, and Garcia-Retamero R. 2017. Cassandra’s regret: the psychology of not wanting to know. Psychological Review, 124: 179–196.
Groffman PM, Stylinski C, Nisbet MC, Duarte CM, Jordan R, Burgin A, et al. 2010. Restarting the conversation: challenges at the interface between ecology and society. Frontiers in Ecology and the Environment, 8: 284–291.
Gross M. 2007. The unknown in process: dynamic connections of ignorance, non-knowledge and related concepts. Current Sociology, 55: 742–759.
Gross M, and McGoey L (Editors). 2015. Routledge international handbook of ignorance studies. Routledge, London, UK.
Hartnett SJ, and Stengrim LA. 2004. “The whole operation of deception”: reconstructing President Bush’s rhetoric of weapons of mass destruction. Cultural Studies ↔ Critical Methodologies, 4: 152–197.
Head ML, Holman L, Lanfear R, Kahn AT, and Jennions MD. 2015. The extent and consequences of p-hacking in science. PLoS Biology, 13: e1002106.
Hermann T, Hunt A, and Neuhoff JG (Editors). 2011. The sonification handbook. Logos, Berlin, Germany.
Hicks D, Wouters P, Waltman L, de Rijcke S, and Rafols I. 2015. The Leiden Manifesto for research metrics. Nature, 520: 429–431.
Higgins K. 2016. Post-truth: a guide for the perplexed. Nature, 540: 9.
Howard PN, and Kollanyi B. 2016. Bots, #StrongerIn, and #Brexit: computational propaganda during the UK-EU referendum. arXiv:1606.06356.
Ioannidis JPA. 2012. Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7: 645–654.
Jeschke JM. 2014. General hypotheses in invasion ecology. Diversity and Distributions, 20: 1229–1234.
Jeschke JM, Gómez Aparicio L, Haider S, Heger T, Lortie CJ, Pyšek P, et al. 2012. Support for major hypotheses in invasion biology is uneven and declining. NeoBiota, 14: 1–20.
Jeschke JM, Kaushal SS, and Tockner K. 2016. Diversifying skills and promoting teamwork in science. Eos, 97.
Kaushal SS, and Jeschke JM. 2013. Collegiality versus competition: how metrics shape scientific communities. BioScience, 63: 155–156.
Kearns CE, Glantz SA, and Schmidt LA. 2015. Sugar industry influence on the scientific agenda of the National Institute of Dental Research’s 1971 National Caries Program: a historical analysis of internal documents. PLoS Medicine, 12: e1001798.
Keim D, Kohlhammer J, Ellis G, and Mansmann F. 2010. Mastering the information age: solving problems with visual analytics. Eurographics Association, Goslar, Germany.
Kitcher P. 2011. Science in a democratic society. Prometheus, Amherst, New York.
Kleinman DL, and Suryanarayanan S. 2012. Dying bees and the social production of ignorance. Science, Technology, & Human Values, 38: 492–517.
Kollanyi B, Howard PN, and Woolley SC. 2016. Bots and automation over Twitter during the first U.S. presidential debate. Data Memo 2016.1. Project on Computational Propaganda, Oxford, UK.
Kraker P, Leony D, Reinhardt W, and Beham G. 2011. The case for an open science in technology enhanced learning. International Journal of Technology Enhanced Learning, 3: 643–654.
Kreiß C. 2015. Gekaufte Forschung: Wissenschaft im Dienste der Konzerne. Europa Verlag, Berlin, Germany.
Kupferschmidt K. 2018. Tide of lies. Science, 361: 636–641.
Lawrence PA. 2007. The mismeasurement of science. Current Biology, 17: R583–R585.
Lehrer J. 2010. The truth wears off. New Yorker, 13 December. pp. 52–57.
Lexchin J. 2012. Those who have the gold make the evidence: how the pharmaceutical industry biases the outcomes of clinical trials of medications. Science and Engineering Ethics, 18: 247–261.
Lexchin J, Bero L, Djulbegovic B, and Clark O. 2003. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ, 326: 1167–1170.
Lindenmayer D, and Scheele B. 2017. Do not publish: limiting open-access information on rare and endangered species will help to protect them. Science, 356: 800–801.
Mazor T, Doropoulos C, Schwarzmueller F, Gladish DW, Kumaran N, Merker K, et al. 2018. Global mismatch of policy and research on drivers of biodiversity loss. Nature Ecology & Evolution, 2: 1071–1074.
McNeil M. 2013. Between a rock and a hard place: the deficit model, the diffusion model and publics in STS. Science as Culture, 22: 589–608.
Merton RK. 1968. The Matthew effect in science. Science, 159: 56–63.
Millgram E. 2015. The great endarkenment: philosophy for an age of hyperspecialization. Oxford University Press, Oxford, UK.
Mirowski P. 2018. The future(s) of open science. Social Studies of Science, 48: 171–203.
Moon K, and Blackman D. 2014. A guide to understanding social science research for natural scientists. Conservation Biology, 28: 1167–1177.
Muller JZ. 2018. The tyranny of metrics. Princeton University Press, Princeton, New Jersey.
Nagel J. 2014. Knowledge: a very short introduction. Oxford University Press, Oxford, UK.
Naisbitt J. 1982. Megatrends: ten new directions transforming our lives. Warner Books, New York, New York.
Nakagawa S, Samarasinghe G, Haddaway NR, Westgate MJ, O’Dea RE, Noble DWA, et al. 2019. Research weaving: visualizing the future of research synthesis. Trends in Ecology & Evolution, 34: 224–238.
Neuroskeptic. 2012. The nine circles of scientific hell. Perspectives on Psychological Science, 7: 643–644.
Nisbet MC, and Scheufele DA. 2009. What’s next for science communication? Promising directions and lingering distractions. American Journal of Botany, 96: 1767–1778.
Nowotny H, Scott P, and Gibbons M. 2003. Introduction: ‘Mode 2’ revisited: the new production of knowledge. Minerva, 41: 179–194.
O’Doherty KC, Christofides E, Yen J, Bentzen HB, Burke W, Hallowell N, et al. 2016. If you build it, they will come: unintended future uses of organised health data collections. BMC Medical Ethics, 17: 54.
OECD. 2017. Main science and technology indicators. OECD Science, Technology and R&D Statistics.
Ohm P. 2009. Broken promises of privacy: responding to the surprising failure of anonymization. UCLA Law Review, 57: 1701–1777.
Open Science Collaboration. 2015. Estimating the reproducibility of psychological science. Science, 349: aac4716.
Oreskes N, and Conway EM. 2010. Merchants of doubt. Bloomsbury Press, New York, New York.
Pärtel M, Szava-Kovats R, and Zobel M. 2011. Dark diversity: shedding light on absent species. Trends in Ecology & Evolution, 26: 124–128.
Perc M. 2014. The Matthew effect in empirical data. Journal of the Royal Society Interface, 11: 20140378.
Pielke RA Jr. 2007. The honest broker: making sense of science in policy and politics. Cambridge University Press, Cambridge, UK.
Pinker S. 1997. How the mind works. Norton, New York, New York.
Plavén-Sigray P, Matheson GJ, Schiffler BC, and Thompson WH. 2017. The readability of scientific texts is decreasing over time. eLife, 6: e27725.
Prinz F, Schlange T, and Asadullah K. 2011. Believe it or not: how much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10: 712.
Proctor RN. 2016. Climate change in the age of ignorance. New York Times, 20 November. p. SR4.
Proctor RN, and Schiebinger L (Editors). 2008. Agnotology: the making and unmaking of ignorance. Stanford University Press, Stanford, California.
Putnam RD. 2000. Bowling alone: the collapse and revival of American community. Simon and Schuster, New York, New York. 546 p.
Resnik DB. 2006. The price of truth: how money affects the norms of science. Oxford University Press, Oxford, UK.
Rowley J. 2007. The wisdom hierarchy: representations of the DIKW hierarchy. Journal of Information Science, 33: 163–180.
Schooler J. 2011. Unpublished results hide the decline effect. Nature, 470: 437.
Schulz W. 2012. Changes in knowledge about and perception of civics and citizenship over a ten-year period: comparing CIVED 1999 and ICCS 2009. Paper presented at the European Conference on Educational Research (ECER), Cádiz, Spain, 18–21 September 2012 [online]: Available from https://iccs.acer.edu.au/files/ECER1-SchulzW-CivicEdChanges.pdf.
Schulz W, Ainley J, Fraillon J, Kerr D, and Losito B. 2010. ICCS 2009 International Report: civic knowledge, attitudes, and engagement among lower-secondary school students in 38 countries. International Association for the Evaluation of Educational Achievement (IEA), Amsterdam, the Netherlands. 313 p. [online]: Available from http://www.iea.nl/fileadmin/user_upload/Publications/Electronic_versions/ICCS_2009_International_Report.pdf.
Sijtsma K. 2016. Playing with data—or how to discourage questionable research practices and stimulate researchers to do things right. Psychometrika, 81: 1–15.
Smith B, Baron N, English C, Galindo H, Goldman E, McLeod K, et al. 2013. COMPASS: navigating the rules of scientific engagement. PLoS Biology, 11: e1001552.
Tydecks L, Jeschke JM, Wolf M, Singer G, and Tockner K. 2018. Spatial and topical imbalances in biodiversity research. PLoS ONE, 13: e0199327.
Ungar S. 2008. Ignorance as an under-identified social problem. The British Journal of Sociology, 59: 301–326.
Van den Berg H. 2014. How the culture of economics stops economists from studying group behavior and the development of social cultures. World Economic Review, 3: 53–68.
Viner K. 2016. How technology disrupted the truth. The Guardian [online]: Available from https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth.
Weingart P. 2005. Impact of bibliometrics upon the science system: inadvertent consequences? Scientometrics, 62: 117–131.
Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, et al. 2016. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3: 160018.
Wilson KA, Auerbach NA, Sam K, Magini AG, Moss ASL, Langhans SD, et al. 2016. Conservation research is not happening where it is most needed. PLoS Biology, 14: e1002413.
Wynne B. 1992. Misunderstood misunderstanding: social identities and public uptake of science. Public Understanding of Science, 1: 281–304.
Yeh ET. 2016. ‘How can experience of local residents be “knowledge”?’ Challenges in interdisciplinary climate change research. Area, 48: 34–40.
Young-McCaughan S, Rich IM, Lindsay GC, and Bertram KA. 2002. The Department of Defense Congressionally Directed Medical Research Program: innovations in the federal funding of biomedical research. Clinical Cancer Research, 8: 957–962.

Supplementary material

Supplementary Material 1 (PDF / 479 KB)

Published In

FACETS
Volume 4, Number 1, June 2019
Pages: 423–441
Editor: Nicole L. Klenk

History

Received: 1 February 2019
Accepted: 7 May 2019
Version of record online: 26 August 2019

Data Availability Statement

All relevant data are within the paper and in the Supplementary Material.

Key Words

  1. agnotology
  2. knowledge–ignorance paradox
  3. open science
  4. reproducibility
  5. research biases
  6. Scientific tower of Babel

Authors and Affiliations

Jonathan M. Jeschke [email protected]
Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB), Müggelseedamm 310, 12587 Berlin, Germany
Department of Biology, Chemistry, and Pharmacy, Institute of Biology, Freie Universität Berlin, Königin-Luise-Str. 1-3, 14195 Berlin, Germany
Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Altensteinstr. 34, 14195 Berlin, Germany
Sophie Lokatis
Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB), Müggelseedamm 310, 12587 Berlin, Germany
Department of Biology, Chemistry, and Pharmacy, Institute of Biology, Freie Universität Berlin, Königin-Luise-Str. 1-3, 14195 Berlin, Germany
Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Altensteinstr. 34, 14195 Berlin, Germany
Isabelle Bartram
Department of Biology, Chemistry, and Pharmacy, Institute of Biology, Freie Universität Berlin, Königin-Luise-Str. 1-3, 14195 Berlin, Germany
Klement Tockner
Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB), Müggelseedamm 310, 12587 Berlin, Germany
Department of Biology, Chemistry, and Pharmacy, Institute of Biology, Freie Universität Berlin, Königin-Luise-Str. 1-3, 14195 Berlin, Germany
Austrian Science Fund (FWF), Sensengasse 1, 1090 Vienna, Austria

Author Contributions

JMJ and KT conceived and designed the study.
SL performed the experiments/collected the data.
All authors analyzed and interpreted the data.
JMJ and KT contributed resources.
All authors drafted or revised the manuscript.

Competing Interests

The authors have declared that no competing interests exist.
