Scientism Part II - Ethics, Morality, Corruption and Reproducibility
More on the limits not of empiricism, but of science
By Alexander Mills
The Sacred Limits of Science: Why Our Greatest Tool Can't Be Our Only Guide
Science has given us antibiotics that save millions of lives, smartphones that connect us across continents, and climate models that help us understand our planet's future. It's humanity's most powerful tool for understanding the natural world—a systematic method that has revolutionized how we live, work, and see ourselves.
But here's what we rarely talk about: science, for all its power, has boundaries. And when we forget those boundaries—when we expect science to answer every question and solve every problem—we set ourselves up for disappointment, bad decisions, and a dangerous kind of intellectual overreach.
Understanding science's limits isn't about diminishing its value. It's about using this incredible tool wisely, ethically, and in harmony with other forms of human knowledge. Let's explore four critical limitations that every scientifically literate person should understand.
1. The Brutal Economics of Real Science
Pop culture gives us a romanticized view of scientific discovery: the lone genius having a eureka moment in their garage, the simple experiment that changes everything overnight. The reality is far messier and more expensive.
Real science requires enormous investments of time, money, and human resources. Consider the development of a single new drug: it typically costs between $1 billion and $3 billion and takes 10-15 years from laboratory to pharmacy shelf. That's not including the countless compounds that fail along the way, each representing millions in sunk costs.
The Hubble Space Telescope, which has revolutionized our understanding of the cosmos, cost $14 billion. Even smaller studies add up quickly—a well-designed clinical trial with a few hundred participants can easily cost millions.
This economic reality creates profound limitations:
Priority Distortion: Research priorities often reflect funding availability rather than human need. We have extensive studies on male-pattern baldness (funded by pharmaceutical companies) but limited research on diseases that primarily affect poor populations.
Temporal Constraints: The pressure to publish and secure continued funding can rush scientific processes that naturally require decades. Climate science, for instance, ideally needs century-long datasets, but policy decisions can't wait that long.
Resource Scarcity: Many important questions simply can't be studied because they're too expensive or take too long. We still don't fully understand the long-term effects of many chemicals in our environment because the studies would take generations to complete.
Access Inequality: Rich institutions and countries can fund cutting-edge research while poorer regions can't even afford to replicate existing studies. This creates blind spots in our scientific understanding of global problems.
The uncomfortable truth is that our scientific knowledge is always incomplete, always provisional, and always shaped by what we can afford to study. Recognizing this doesn't diminish science's value—it helps us make better decisions with incomplete information.
2. The Is-Ought Problem: Why Science Can't Tell Us What We Should Do
In the 18th century, philosopher David Hume articulated a problem that strikes at the heart of how we use scientific knowledge: you cannot derive moral prescriptions (what ought to be) from factual descriptions (what is). This "is-ought problem" reveals one of science's most fundamental limitations.
Science excels at describing reality: how neurons fire, why climates change, how economies grow. But it cannot, on its own, tell us whether these phenomena are good or bad, right or wrong, desirable or undesirable.
Consider these examples:
Environmental Policy: Science can tell us that burning fossil fuels increases atmospheric CO2 and raises global temperatures. It can model the likely consequences: rising sea levels, changing weather patterns, ecosystem disruption. But science cannot tell us whether we should prioritize economic growth over environmental protection, or how to fairly distribute the costs of climate action.
Genetic Engineering: Science can explain how CRISPR technology works and predict its effects on human DNA. It can demonstrate the potential to eliminate genetic diseases. But it cannot tell us whether we should edit human genes, which enhancements are ethical, or how to ensure equitable access to genetic therapies.
Social Policy: Science can study the effects of different educational approaches, welfare systems, or criminal justice policies. But it cannot tell us what kind of society we should create, what values we should prioritize, or how we should balance individual freedom against collective welfare.
The deeper issue: Even when we agree on basic values like "reducing suffering" or "promoting wellbeing," science often can't tell us how to achieve these goals. Different policies might reduce different types of suffering, help different groups of people, or promote wellbeing in different timeframes. Choosing between them requires moral judgment that goes beyond empirical evidence.
This limitation becomes dangerous when we pretend it doesn't exist—when politicians claim their policies are "purely scientific" or when experts suggest their recommendations are value-neutral. All policy recommendations, even those based on solid science, embed moral and political judgments about what matters most.
3. When Money Corrupts the Temple of Truth
Science depends on funding, and funding often comes with strings attached. When those strings are pulled by corporate interests, the results can be devastating for both scientific integrity and public trust.
The tobacco industry perfected this corruption decades ago. Despite internal documents showing they knew cigarettes caused cancer as early as the 1950s, tobacco companies funded studies designed to create doubt about smoking's health effects. They didn't need to prove cigarettes were safe—they just needed to muddy the waters enough to prevent decisive action.
This playbook has been used repeatedly:
Pharmaceutical Industry: Companies have suppressed unfavorable trial results, ghostwritten research papers, and designed studies to make their drugs look better than competitors. The opioid crisis was fueled partly by industry-funded research that downplayed addiction risks.
Chemical Industry: Studies on pesticides, plastics, and industrial chemicals are often funded by the companies that produce them. Independent research frequently finds more concerning effects than industry-sponsored studies.
Food Industry: Sugar industry-funded research in the 1960s helped shift blame for heart disease from sugar to fat, influencing dietary guidelines for decades. Modern food giants continue funding studies that question links between processed foods and health problems.
Climate Science: Fossil fuel companies have spent millions funding climate denial research and think tanks, creating artificial controversy about settled science.
The corruption takes many forms:
Study Design Bias: Framing research questions to favor desired outcomes
Publication Bias: Suppressing negative results while promoting positive ones
Researcher Capture: Building long-term relationships with scientists who become dependent on industry funding
Regulatory Capture: Using industry-friendly research to influence policy and regulation
The problem isn't that all industry-funded research is invalid—much of it follows rigorous scientific standards. The problem is that financial incentives can subtly (or not so subtly) influence every aspect of the research process, from what questions get asked to how results are interpreted and reported.
This creates a crisis of trust. When the public discovers that influential studies were industry-funded, they may lose faith in all scientific research. This skepticism, while sometimes warranted, can also lead to rejection of legitimate scientific findings that don't serve corporate interests.
4. The Reproducibility Crisis: When Science Can't Replicate Itself
Science's greatest strength is supposed to be its self-correcting nature. If a finding is real and important, other scientists should be able to replicate it. But over the past decade, researchers have discovered a troubling problem: many published studies can't be reproduced.
The numbers are sobering:
Psychology: Only 36% of studies could be successfully replicated in a major 2015 effort
Cancer research: Researchers could reproduce only 11% of landmark studies
Economics: Replication rates vary widely but often fall below 50%
Medicine: Many clinical trial results can't be reproduced by independent researchers
What's causing this crisis?
Statistical Gaming: Researchers manipulate data analysis to find "significant" results—a practice called p-hacking. By trying multiple statistical tests or excluding inconvenient data points, they can make random noise look like meaningful patterns.
Publication Bias: Journals prefer exciting, positive results over boring, negative ones. This creates pressure to find effects that aren't really there and to bury studies that show no effect.
Sample Size Problems: Many studies use too few participants to reliably detect real effects. Small studies are more likely to produce false positives and are less likely to replicate.
Career Incentives: Academic careers depend on publishing research in high-profile journals. This "publish or perish" culture encourages quantity over quality and novelty over rigor.
Complexity and Sloppiness: Modern research often involves complex statistical analyses and large datasets. Small errors in methodology or analysis can lead to dramatically different results.
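Two of these failure modes—statistical gaming and underpowered samples—are easy to see in a simulation. The sketch below (all parameters are illustrative choices, not figures from any specific study) first runs twenty tests on pure noise, where any "significant" result is by definition a false positive, and then shows how often a real but modest effect is detected at two different sample sizes:

```python
import random
import statistics

random.seed(42)

def t_stat(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def significant(a, b, threshold=2.0):
    # |t| > ~2.0 roughly approximates p < 0.05 at these sample sizes
    return abs(t_stat(a, b)) > threshold

# 1) Multiple comparisons on pure noise: test 20 unrelated "outcomes"
#    where no real effect exists. Each test has a ~5% false-positive
#    rate, so a determined analyst will often find *something*.
hits = sum(
    significant([random.gauss(0, 1) for _ in range(30)],
                [random.gauss(0, 1) for _ in range(30)])
    for _ in range(20)
)
print(f"False positives in 20 null tests: {hits}")

# 2) Low power: a real but modest effect (0.3 standard deviations) is
#    usually missed with 20 participants per group, and usually caught
#    with 200 per group.
def detection_rate(n, effect=0.3, trials=500):
    found = sum(
        significant([random.gauss(0, 1) for _ in range(n)],
                    [random.gauss(effect, 1) for _ in range(n)])
        for _ in range(trials)
    )
    return found / trials

print(f"Power with n=20 per group:  {detection_rate(20):.0%}")
print(f"Power with n=200 per group: {detection_rate(200):.0%}")
```

The point of the sketch is the asymmetry it exposes: noise plus enough tests reliably produces "findings," while small samples reliably miss real effects—exactly the combination that fills journals with results that won't replicate.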
The human cost: Medical treatments based on irreproducible research can harm patients. Educational policies based on flawed psychology studies can waste resources and hurt students. Economic policies based on unreliable research can affect millions of lives.
Recent examples of high-profile failures:
Studies claiming that certain foods dramatically extend lifespan
Research suggesting that subtle environmental cues can completely change behavior
Medical trials showing miraculous benefits from treatments that later proved ineffective
Economic studies claiming large effects from policies that subsequent research found had minimal impact
The reproducibility crisis doesn't mean all science is broken, but it does mean we need to be more humble about scientific claims, especially from single studies. The most reliable scientific knowledge comes from multiple independent studies that reach similar conclusions.
What This Means for How We Live
Understanding science's limitations doesn't mean embracing anti-intellectualism or rejecting evidence-based thinking. Instead, it means using science more wisely:
Intellectual Humility: Recognize that scientific knowledge is always provisional and incomplete. Today's scientific consensus might be tomorrow's outdated theory.
Multiple Sources of Wisdom: Combine scientific insights with philosophical reflection, practical experience, moral reasoning, and cultural wisdom. No single approach to knowledge has all the answers.
Follow the Money: Ask who funded research and what incentives might have influenced the results. Independent replication is worth more than a single well-funded study.
Wait for Replication: Be skeptical of exciting findings from single studies. The most reliable knowledge comes from multiple independent studies reaching similar conclusions.
Moral Responsibility: Remember that science can inform our values but can't determine them. We must take responsibility for the moral and political judgments that shape how we use scientific knowledge.
Proportional Confidence: Adjust your confidence in scientific claims based on the quality and quantity of evidence. Extraordinary claims require extraordinary evidence.
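The "extraordinary claims" intuition can be made concrete with Bayes' rule. In this back-of-envelope sketch, all three input numbers are assumptions chosen for illustration, not measured values; the takeaway is the shape of the result, not the exact figure:

```python
# How much should one "significant" study move our confidence?
prior = 0.01   # assumed: 1 in 100 tested hypotheses is actually true
power = 0.80   # assumed: chance a true effect yields a significant result
alpha = 0.05   # assumed: chance a null effect yields a significant result

# Bayes' rule: P(true | significant) = P(sig | true) * P(true) / P(sig)
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"P(hypothesis true | one significant study) = {posterior:.2f}")
```

With these inputs, a single statistically significant result raises the probability that the claim is true from 1% to only about 14%—still far more likely false than true. That is the arithmetic behind waiting for replication: surprising claims start with low priors, so one study rarely suffices.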
The Sacred Role of Science in Human Flourishing
Recognizing science's limits doesn't diminish its profound importance. Science has liberated us from countless superstitions, given us tools to understand and shape our world, and continues to reveal the deep patterns underlying reality.
But science works best when it knows its place—as one crucial voice in the human conversation about how to live well, not as the only voice that matters. When we treat science as infallible or assume it can answer every question, we actually diminish its power by asking it to do things it was never designed to do.
The future belongs not to those who worship science uncritically or reject it entirely, but to those who can skillfully integrate scientific insights with other forms of human wisdom. This integration requires intellectual humility, moral courage, and the recognition that truth is too important to be left to any single method of inquiry.
Science is humanity's most powerful tool for understanding the natural world. But like any tool, its value lies not in blind faith but in skillful use—knowing when to apply it, how to interpret its results, and where to look for wisdom when science reaches its limits.
In honoring both science's power and its boundaries, we create space for the full range of human knowledge to flourish. That's not anti-science—it's wisdom in action.