
Modern scientific publishing is often portrayed as a neutral system advancing knowledge through merit and debate, but in practice it operates as a competitive economy of prestige, metrics, and scarcity. Academic survival increasingly depends on publishing frequently and rapidly in a narrow set of high-impact journals, shaping not only how research is evaluated but also which questions are asked and which ideas are abandoned before discussion. In this environment, conformity is often rewarded over originality, as researchers—especially those early in their careers—are encouraged to align with dominant frameworks endorsed by senior figures. Rather than fostering intellectual risk and innovation, the system favors safe, fundable, and repetitive lines of work, ultimately constraining the creative and critical development of science.
This dynamic is commonly summarized by the phrase “publish or perish.” More than a slogan, it captures a structural condition of contemporary academia in which publication has shifted from a means of communicating knowledge to an end in itself. Within this system, novelty is often conflated with trend alignment, peer review is entangled with power relations, and intellectual risk-taking is discouraged. As a result, entire research agendas can disappear—not because they are wrong, but because they are inconvenient, unfashionable, or difficult to place.
Ecology offers a particularly revealing case study of how systemic failures in scientific publishing manifest in practice. The discipline has become increasingly fragmented into micro-paradigms, each with its own journals, methods, and conceptual boundaries. While specialization has undoubtedly generated valuable insights, it has also weakened the integrative perspective required to address large-scale ecological problems. This fragmentation is occurring at precisely the moment when ecological knowledge has never been more urgently needed.
Over recent decades, science and society have undergone a profound process of ecologization, reshaping economic thought, public awareness, research priorities, and education (Odum, 1977; 1997; Semeniuk, 2012; Fernández Mora et al., 2021). In this context, ecologization refers to the growing recognition that ecological principles—such as limits to growth, interdependence, feedbacks, and system dynamics—are fundamental for understanding and governing human activities. It reflects a shift away from viewing nature as an external backdrop to economic and social systems, toward recognizing ecosystems as integral, constraining, and co-evolving components of those systems. As a result, understanding ecosystems is no longer a purely academic exercise; it is central to confronting climate change, biodiversity loss, and the long-term viability of human societies. Against this backdrop, highlighting the growing importance of the challenges ecology must confront is essential for grasping the broader implications—both positive and negative—of the “publish or perish” ethos. Rather than tracing the many causes of ecologization in detail, our focus is on what this shift implies for those producing ecological knowledge: the responsibility borne by researchers, and by the systems that govern what is published, has become immense.
To make this responsibility concrete, it is helpful to briefly clarify what ecology is concerned with. Ecology studies how living organisms interact with one another and with their physical environment, focusing on flows of energy and matter, feedbacks, limits, and the conditions that allow systems to remain stable over time. Because human societies now influence these processes at planetary scale, ecological research no longer speaks only to “nature,” but to the foundations of economic activity, social organization, and long-term human survival.
Ecology: A Science Without a Unifying Paradigm
In ecology, these tensions are particularly evident. For decades, authors such as Simberloff (1980) and Lewin (1983) have noted that the discipline lacks a unifying paradigm. Instead, there exists a network of schools of thought functioning as closed micro-paradigms, resistant to external ideas.
This has created a situation of increasing fragmentation, where each group protects its own conceptual framework, and integrative dialogue rarely occurs. Recent studies, such as those by Scheiner (2013) and Riera et al. (2018), confirm that this situation is not only persistent but worsening.
The consequences are concerning: ideas are recycled under new labels, “safe” topics that guarantee publication and funding are prioritized, critical thinking is relegated to the margins of the system, and the epistemological continuity between classical and modern publications is disrupted (Belovsky et al., 2004). At the same time, ecology has undergone a process of mathematization—a move away from the naturalist perspective emphasized by Margalef—that at times resembles a propagandistic use of mathematics (Koblitz, 1981). This approach appears to be failing (Pilkey & Pilkey-Jarvis, 2006) and risks circularity: “this result, without field data support, was obtained mathematically, and is therefore reliable because it was obtained mathematically.” Consistent with this, Fawcett and Higginson (2012) found that ecology and evolution articles receive 28% fewer citations for each additional equation per page in the main text. Equation-dense papers do tend to be cited more often by other theoretical articles, but this gain is outweighed by a sharp decline in citations from non-theoretical articles (35% fewer per additional equation per page). The main side effect of this trend is a weakening of the link between theoretical and empirical ecology, hindering effective management and conservation of flora and fauna (Angilletta & Sears, 2011; Joseph et al., 2013).
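To make the scale of this citation penalty concrete, the reported percentages can be read as multiplicative reductions per additional equation per page. This is a simplification of the study’s statistical model, and the function, baseline figure, and scenarios below are purely illustrative, not taken from Fawcett and Higginson (2012):

```python
# Illustrative sketch (assumption: the reported percentages act as a
# multiplicative penalty per additional equation per page of main text).
def expected_citations(baseline, equations_per_page, penalty=0.28):
    """Expected citations if each extra equation/page cuts citations by `penalty`."""
    return baseline * (1 - penalty) ** equations_per_page

# A hypothetical paper expected to earn 100 citations with no equations:
print(round(expected_citations(100, 0)))                # → 100
print(round(expected_citations(100, 1)))                # → 72  (28% penalty)
# Citations from non-theoretical readers decline faster (35% per equation/page):
print(round(expected_citations(100, 3, penalty=0.35)))  # → 27
```

Under this reading, even a modest density of equations can cut an article’s reach among empirically oriented readers by more than half, which is precisely the theory–practice gap the passage describes.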
From this perspective, the responsibility of researchers publishing in ecology is not merely to add incremental findings to the literature, but to produce and communicate knowledge that can meaningfully inform how societies understand and navigate ecological limits. Two widely cited statements help illustrate what is at stake.
First, Boulding’s observation that “anyone who believes that exponential economic growth can continue indefinitely in a finite world is either crazy or an economist” (Boulding, 1966; cited in Cairns, 2004, p. 50) captures a core ecological insight: biophysical limits are unavoidable. When ecological research fails to engage with this reality—or is sidelined because it challenges dominant economic assumptions—it undermines its own societal relevance.
Second, the longstanding conceptual divide between ecology and economics further highlights the problem. In ecological models, socioeconomic forces are often treated as external “disturbances,” while in neoclassical economics, ecological processes are relegated to “externalities” (O’Neill and Kahn, 2000; Carpintero, 2007). These mutually isolated abstractions may have been defensible when human impacts were localized, but they became untenable once human activity reached a global scale in the second half of the twentieth century.
Together, these statements point to a deeper responsibility: ecological researchers operate at the interface between natural systems and human decision-making, yet the current publishing system often rewards narrow specialization over integrative, system-level thinking. When publication incentives discourage work that bridges disciplines, challenges dominant paradigms, or addresses uncomfortable limits, they do not merely shape careers—they shape which ecological insights enter public and political debate, and which do not.
What, then, should be done, and what can history teach us about responding to ecological limits? The two statements above underscore a shared lesson: when societies treat biophysical constraints as abstractions—whether as “externalities” or as problems deferred to the future—they risk systemic failure.
A historical analogy can be found in Europe before 1492, a period marked by resource conflicts, famine, epidemics, and low productivity. Europeans largely perceived their world as a closed system, bordered by distant regions such as the Far East and what they believed to be a mysterious and “untamed” Africa. Within this closed worldview, social and ecological pressures accumulated. In thermodynamic terms, entropy could not decrease, consistent with the second law, helping to explain why European societies repeatedly approached collapse. The opening of Europe to the so-called New World fundamentally altered this trajectory by transforming a closed system into an open one.
A similar logic applies today on a planetary scale. If “Europe” is replaced by “planet Earth” and oceanic exploration by space exploration, the analogy becomes clear. As Tsiolkovsky observed, “A planet is the cradle of the mind, but one cannot live in the cradle forever.” Addressing ecological limits on Earth may therefore depend on expanding the horizons—conceptual and material—within which humanity operates.
This perspective reinforces the responsibility of ecological research. Future human survival, whether on Earth or beyond it, will depend on understanding how ecosystems regulate themselves and fail. While physics, chemistry, and mathematics can enable exploration, only ecology can provide the knowledge needed to design self-regulating habitats and sustainable systems. This makes the production of robust, integrative ecological knowledge far more consequential than the mere accumulation of publications.
Has the “publish or perish” culture succeeded in producing ecosystem models robust enough to meet these monumental challenges? Evidence suggests it has not. One of the most ambitious attempts to translate ecological theory into practice was Biosphere 2, a sealed, human-made ecosystem constructed in the early 1990s in the Arizona desert; its name alludes to Earth itself as “Biosphere 1,” and the facility was preceded by smaller-scale closed-system test modules. The project aimed to test whether a closed ecological system—containing forests, oceans, soils, microbes, crops, and humans—could remain self-sustaining over long periods. Despite drawing on the most advanced knowledge available at the time in ecosystem physiology, biogeochemistry, and adaptive structural design, the experiment failed dramatically. Oxygen levels dropped to dangerous levels, agricultural yields were insufficient to feed the inhabitants, carbon dioxide fluctuated unpredictably, and microbial activity in soils consumed oxygen faster than anticipated (Cohen and Tilman, 1996). Social tensions among the crew further revealed how tightly coupled ecological and human systems are—an interaction poorly captured by existing models (Stewart et al., 2013; Zimmer, 2019).
These failures were not the result of technical incompetence, but of overconfidence in simplified and fragmented ecological understanding. Complex feedbacks between soils, microbes, plants, atmosphere, and human behavior proved far more difficult to predict than models suggested. The lesson is sobering: if even carefully designed, well-funded experiments based on leading ecological research could not sustain a small group of humans in a controlled environment, it raises serious questions about whether the prevailing research culture is generating the integrative, system-level knowledge required to confront planetary-scale challenges.
The Illusion of Fairness: Science, Politics, and Alliances
A partial conclusion from the previous section is that something unusual is happening in ecology. The difficulties encountered in closed-system experiments such as Biosphere 2 did not arise from a lack of data or technical sophistication, but from the absence of an integrated theoretical framework capable of linking ecological processes across scales. What these failures exposed was not a single missing parameter, but a deeper fragmentation of ecological knowledge itself.
Once a discipline characterized by rapid conceptual development and coherent theoretical growth, ecology has increasingly become a patchwork of competing proposals. Many of these generate new concepts and hypotheses, yet few mature into unifying theories capable of explaining ecosystem function in a general way. Instead of converging toward shared foundations—as has occurred in other mature sciences—ecological research often expands laterally, producing an overabundance of partially connected ideas. Margalef (1993) anticipated this concern: he emphasized the need for a common theoretical framework and for grounding ecological theory in careful observation of real systems. He argued that the naturalist perspective, which integrates empirical field knowledge with conceptual modeling, is essential for meaningful ecological synthesis. It is for this reason that ecologists who resist simply following prevailing trends continue to highlight two related problems: the absence of a common theoretical superstructure and the progressive devaluation of the naturalist perspective (Margalef, 1993, p. 17).
It is often assumed that science represents an objective search for truth, guided solely by logic and method. In reality, it resembles a political system, where alliances with influential figures often outweigh innovative ideas. This dynamic has created an “academic aristocracy,” in which titles and projects are passed down through fellowships, collaborations, or massive co-authorships.
An early example of a situation increasingly common today is described by Cook (1977) and revolves around the founding article of trophodynamics—the study of energy flow through ecosystems (Lindeman, 1942; cited 5,419 times as of December 25, 2025). Trophodynamics lies at the heart of orthodox ecosystem theory, providing a framework for understanding how energy captured by primary producers moves through consumers and decomposers, ultimately shaping ecosystem structure and function.
Remarkably, Lindeman’s seminal paper was initially rejected by the journal Ecology based on the opinion of two reviewers. One suggested that “if Dr. Lindeman could set this article aside for ten years, then publish it and see how it looks in light of what we hope will be the accumulation of limnological information; he might congratulate himself for having postponed its publication” (Cook, 1977).
The article was ultimately accepted due to the influence of George Evelyn Hutchinson, often called the “father of modern ecology.” Hutchinson was a highly respected ecologist at Yale University whose work shaped limnology, systems ecology, and theoretical approaches to biodiversity. His editorial mediation and judgment carried exceptional weight in the field, allowing Lindeman’s innovative ideas to reach publication despite initial skepticism.
In the article’s appendix, Hutchinson reflected on Lindeman’s intentions and contributions:
“While this, his sixth full article, was in press, Raymond Lindeman passed away after a long illness on June 29, 1942, at the age of twenty-seven (…) Knowing that the life of a man, at best, is too short to conduct intensive studies at more than a few localities, and before completing the manuscript, never to return to the field, he wanted others to think in the same terms he had found so stimulating, and to collect material that would confirm, expand, or correct his theoretical conclusions (…) it is this work to which we must turn as the principal contribution of one of the most creative and generous minds ever devoted to ecological science.”
Contextualized, this quote highlights both Lindeman’s foresight and the systemic fragility of innovation in science: without Hutchinson’s judgment and intervention, one of ecology’s foundational papers might have remained unpublished or delayed for years. By pushing Lindeman’s work through, Hutchinson ensured that the concept of energy flow became central to ecosystem theory, influencing generations of ecologists, guiding empirical studies, and establishing a framework that allowed ecology to evolve from descriptive natural history into a predictive, systems-oriented science. The episode illustrates how individual editorial decisions—and the presence of supportive mentors—can have lasting effects on the trajectory of a discipline.
Sometimes, manuscript authors can only wonder whether their reviewers have ever been authors themselves, or whether, on becoming reviewers, they pass through a wormhole into a parallel universe entirely devoid of professional empathy—understood here as the capacity to situate oneself within the epistemological and fieldwork context in which the manuscript’s authors produced their work.
However, the post-“Lindeman case” situation has become increasingly evident in several areas of science. Consider, for example, the field of Nuclear Magnetic Resonance (NMR) and functional Magnetic Resonance Imaging (fMRI). Long before Fourier transform NMR became a cornerstone of chemistry and medicine, Richard R. Ernst faced a difficult path to publication. His groundbreaking work applying Fourier transform techniques to magnetic resonance — the very approach that underpins modern NMR spectroscopy and fMRI — was initially rejected twice by the Journal of Chemical Physics, as early reviewers failed to recognize its potential. Undeterred, Ernst published the work elsewhere, laying the foundation for a revolution in molecular imaging and analysis. Decades later, in recognition of the transformative impact of this research, he was awarded the 1991 Nobel Prize in Chemistry (Ernst & Anderson, 1966). Ernst’s experience illustrates how even highly influential scientific breakthroughs can encounter initial skepticism, highlighting how traditional peer-review systems may inadvertently suppress innovation.
A parallel example comes from theoretical physics. In the 1960s, Peter Higgs proposed a mechanism explaining how elementary particles acquire mass, predicting the existence of a particle later called the Higgs boson. His early manuscript faced rejection from Physics Letters before being accepted by Physical Review Letters after revisions that clarified its theoretical implications. Decades later, the Higgs boson was experimentally confirmed at CERN, and Higgs shared the 2013 Nobel Prize in Physics with François Englert (Higgs, 1964). This case similarly demonstrates that even groundbreaking theoretical work can be underestimated initially, yet ultimately reshape our understanding of the universe.
Instances of delayed recognition are not limited to historical milestones. More recently, in October 2022, the authors of this contribution submitted a constructive critique of an article previously published in a high-impact journal. In this instance, the editorial process required that the original authors review the critique before the journal would consider it for publication. While the manuscript was eventually published in another venue (Riera et al., 2024), the episode underscores how editorial practices can, intentionally or not, impose barriers to open scientific discussion. Collectively, these examples illustrate that the challenge of recognizing and supporting high-risk, high-reward research is not just historical: it remains a relevant concern for contemporary peer review and highlights the potential value of mechanisms — including AI-assisted evaluation tools — that could help identify innovative contributions that might otherwise be overlooked.
In the words of the aphorism attributed to Aristotle and famously invoked by Newton: “Amicus Plato, sed magis amica veritas” (Plato is my friend, but truth is a greater friend). The problem today is that scientists frequently choose their “friend Plato” over the uncomfortable “truth.” Such a choice can entail exhausting investments of time and effort to overcome publication obstacles that should not exist, and can provoke conflicts with colleagues when one does not fully align with their positions.
The Editorial Model: Exclusivity, Prestige, and Bias
The editorial system exacerbates these problems. Although chief editors appear autonomous, they respond to institutional structures that validate the status quo. This has transformed the most influential scientific journals into elitist spaces, where publication becomes a status symbol rather than a mechanism for disseminating transformative knowledge (Desrochers et al., 2018).
The situation is further complicated by the proliferation of Open Access journals, which in many cases respond more to commercial than scientific interests. As Judson (2004) points out, this model has opened the door to malpractice, manipulated results, and a publishing industry that thrives on the desperate need to publish.
Some of these journals function as “Veblen goods”: the more exclusive (i.e., low probability of acceptance) a journal is—higher rejection rates (Aarssen et al., 2008; Sugimoto et al., 2013), more famous names on the editorial board—the greater its symbolic value, and therefore the more time and money authors are willing to invest to publish in it. In this sense, they behave inversely to ordinary goods, for which scarcity generally reduces practical demand rather than increasing it. Publication in such journals is “academic glory,” even if content relevance or quality is compromised, and the proposals contained in the manuscripts often follow well-trodden paths.
It is increasingly likely that research with the potential to transform ecology is being published in lower-profile journals rather than in high-impact ones, leading to reduced visibility and a slowdown in scientific progress. This problem is compounded by economic and geographic barriers: in many parts of the world, especially where open-access article processing charges are unaffordable, researchers are effectively excluded from prestige journals that dominate readership in Europe and North America. As a result, high-quality ecological research remains under-read, under-cited, or entirely overlooked, reinforcing inequities and narrowing the knowledge base at a time when ecology urgently requires diverse, global perspectives.
A System That Discourages Innovation
In this context, innovation can be discouraging and counterproductive for building a successful academic career, since “the nail that sticks out gets hammered down,” the hammer here being the status quo (recall the “Lindeman case” above). The peer-review system functions reasonably well in times of stability but becomes an obstacle when ideas challenge the established order. Editorial review can act as a symbolic guillotine for risky proposals, even when they are valuable.
Rejection is often justified with phrases such as “we feel this topic is not of interest to our readers.” This type of reasoning, closer to biased divination than empirical analysis, clashes with science’s demand for objectivity. Meanwhile, hyperproduction continues: approximately 2.5 million scientific articles are published each year (Benedictus et al., 2016), many signed by dozens of co-authors, of whom only a few truly master the content. The accelerated pace and constant pressure hinder deep learning and the development of critical thinking among young scientists.
Conclusion: Toward a Freer and More Critical Science
Science cannot advance if it becomes a closed system, where recognition depends more on networks than on content. Ecology, like many other disciplines, needs to reclaim dialogue, openness, and intellectual audacity. This requires rethinking the mechanisms for evaluating, publishing, and validating knowledge.
Artificial intelligence can be a powerful ally, but human will is also necessary to reform the rules of the game. Only then can science fulfill its true purpose: generating relevant, innovative, and transformative knowledge, regardless of whom it may challenge.
