Ethics and Standards in The Science

Scientific ethics is not a separate department — it is load-bearing infrastructure. From the design of a study to the reporting of its results, ethical standards govern what counts as valid knowledge and what disqualifies a finding before it ever reaches a journal. This page covers the foundational principles of scientific ethics, how oversight structures function in practice, where the real tensions live, and what separates genuine standards from their cosmetic imitations.


Definition and scope

Scientific ethics encompasses the principles and institutional mechanisms that govern how research is designed, conducted, reported, and used. The scope is broad by necessity: it runs from the treatment of human research subjects to data handling, from authorship attribution to conflicts of interest, and from peer review conduct to the responsible communication of findings to the public.

The U.S. federal framework anchors much of this landscape. The Office of Research Integrity (ORI), housed within the Department of Health and Human Services, defines research misconduct as fabrication, falsification, or plagiarism — the "FFP" triad — committed in proposing, performing, reviewing, or reporting research (ORI, Definition of Research Misconduct). Importantly, ORI's jurisdiction covers federally funded research, which means a significant portion of academic science in the United States falls within its purview.

Parallel frameworks operate internationally. The Committee on Publication Ethics (COPE) sets standards for journal editors and publishers across more than 12,000 member journals worldwide. The World Medical Association's Declaration of Helsinki remains the foundational document for ethical standards in human medical research, first adopted in 1964 and revised most recently in 2013.

The scope of scientific ethics, then, is as wide as science itself — and at The Science Authority, the same foundational standards apply whether the topic is molecular biology, environmental monitoring, or behavioral research.


Core mechanics or structure

The working architecture of scientific ethics operates through three interlocking systems: institutional oversight, publication standards, and legal or regulatory requirements.

Institutional Review Boards (IRBs) are the most visible mechanism for human subjects protection. Under the U.S. Common Rule (45 CFR Part 46), any research involving human subjects that receives federal funding must receive IRB review before it begins. IRBs evaluate risk-benefit ratios, informed consent procedures, and participant privacy protections.

Peer review serves as the publication-side checkpoint. Though imperfect, it represents the field's formal commitment to having claims evaluated by independent qualified experts before they enter the scientific record. The mechanics vary — single-blind, double-blind, open review — but the core function is the same: external validation that the methodology and conclusions hold up.

Data management standards form the third pillar. Funding bodies including the National Institutes of Health (NIH) require data management and sharing plans; the NIH Data Management and Sharing Policy, effective January 2023, applies to NIH-funded research that generates scientific data. This requirement pushes researchers to document provenance, ensure reproducibility, and make underlying data accessible for verification.


Causal relationships or drivers

What causes ethical failures in science is not usually malice — it is pressure. The "publish or perish" incentive structure in academic institutions creates measurable distortions. A 2012 analysis published in PNAS by Fang, Steen, and Casadevall found that of 2,047 retracted biomedical and life-science articles examined, 67.4% were attributable to misconduct rather than error — fraud, duplicate publication, or plagiarism. The structural pressure to produce novel, positive results correlates with the incentive to cut corners.

Funding dynamics compound this. Researchers who obtain grants from interested parties — pharmaceutical companies, agricultural corporations, industry trade groups — face conflicts of interest that, even when disclosed, can subtly shape study design and outcome reporting. The National Academies of Sciences, Engineering, and Medicine has documented how undisclosed or unmanaged conflicts of interest undermine public trust in scientific findings.

Replication failure — the inability of independent labs to reproduce published results — has become a recognized systemic issue. The Open Science Collaboration's 2015 study in Science successfully replicated, by the criterion of statistical significance, only 36 of 100 published psychological findings, and replication effect sizes averaged roughly half those of the originals — a result that reshaped how the field thinks about statistical power and publication bias.
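Low statistical power is one structural driver of replication failure. As a toy illustration (not a model of the actual replicated studies), the power of a two-sided two-sample z-test can be computed directly: with true standardized effect d and n subjects per group, the test statistic is centered at d·√(n/2), and power at α = 0.05 is the probability it clears the 1.96 threshold. A sketch using only the Python standard library:

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test.
    d = true standardized effect size, n = subjects per group."""
    norm = NormalDist()
    z_crit = norm.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    shift = d * sqrt(n / 2)                # center of the test statistic
    return (1 - norm.cdf(z_crit - shift)) + norm.cdf(-z_crit - shift)

# A modest effect with a typical small sample is badly underpowered:
print(round(power_two_sample(d=0.4, n=30), 2))   # ≈ 0.34
print(round(power_two_sample(d=0.4, n=100), 2))  # ≈ 0.81
```

If journals publish mainly significant results, a literature built from studies powered near 34% will replicate at roughly that rate — a purely structural account that requires no misconduct at all.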


Classification boundaries

Not every ethical failure is the same kind of failure. The classification matters for how institutions respond.

Research misconduct (fabrication, falsification, plagiarism) is the most serious category — it represents an intentional violation. ORI's definition explicitly excludes "honest error or differences of opinion," a boundary that has significant procedural importance during investigations.

Questionable Research Practices (QRPs) occupy the middle ground: practices that are not outright fraud but compromise the integrity of findings. These include p-hacking (running multiple statistical tests until a significant result appears), HARKing (Hypothesizing After Results are Known), selective outcome reporting, and undisclosed post-hoc design changes. QRPs are widespread, widely recognized, and largely unpoliced by formal mechanisms.
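The false-positive inflation behind p-hacking can be shown with a short simulation. Under the null hypothesis a p-value is uniformly distributed on [0, 1], so a "study" that runs k independent tests and reports any p < 0.05 has a family-wise error rate of 1 − 0.95^k. A minimal sketch:

```python
import random

def phacked_study(n_tests, alpha=0.05, rng=random):
    """One simulated 'study' that runs n_tests independent tests on pure
    noise and claims success if ANY p-value clears alpha. Under the null,
    each p-value is uniform on [0, 1]."""
    return any(rng.random() < alpha for _ in range(n_tests))

def false_positive_rate(n_tests, n_studies=100_000, seed=0):
    rng = random.Random(seed)
    hits = sum(phacked_study(n_tests, rng=rng) for _ in range(n_studies))
    return hits / n_studies

# One honest test holds the 5% error rate; twenty tries inflate it to
# roughly the analytic value 1 - 0.95**20 ≈ 0.64.
print(false_positive_rate(1))   # ≈ 0.05
print(false_positive_rate(20))  # ≈ 0.64
```

This is why preregistration and corrections for multiple comparisons matter: the inflation comes from the number of undisclosed attempts, not from any single dishonest test.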

Ethical violations in research conduct — such as failure to obtain informed consent, breach of participant confidentiality, or inadequate animal welfare compliance — are governed by separate regulatory frameworks and may trigger legal consequences independent of publication ethics.

The boundary between QRPs and outright misconduct is genuinely contested. A researcher who decides after data collection to report only statistically significant secondary outcomes may believe they exercised scientific judgment; a reviewer examining the preregistration record may classify the same act as selective reporting.


Tradeoffs and tensions

The most honest observation about scientific ethics is that the field's core commitments sometimes pull against each other. Transparency and privacy are in genuine tension when sharing data means exposing identifiable participant information. Speed and rigor conflict when public health emergencies demand fast publication. Openness and intellectual property rights clash when researchers must choose between open data sharing and patent protection for applied findings.

The push for preregistration — publicly committing to hypotheses, methods, and analysis plans before data collection — directly addresses QRPs but imposes a rigidity that some researchers argue is incompatible with exploratory science, where the discovery is precisely the unexpected result.

Conflicts of interest present perhaps the deepest structural tension. The institutions best positioned to fund large-scale research are often those with direct financial stakes in the outcomes. The National Science Foundation's standards for disclosure require management of conflicts rather than elimination — an acknowledgment that the alternative (restricting industry-funded science entirely) would eliminate a substantial portion of applied research.


Common misconceptions

Misconception: Peer review certifies that a finding is true.
Peer review certifies that a manuscript met the threshold for publication in the opinion of typically 2 to 4 reviewers. Reviewers generally cannot access raw data, verify calculations independently, or replicate experiments. A published finding is a vetted claim, not a proven fact.

Misconception: Retraction means the authors committed fraud.
Retraction signals only that a paper can no longer be relied upon; causes include honest error, irreproducibility, and duplicate publication as well as misconduct. Early analyses, such as Steen's 2011 studies, attributed most retractions to error, but the 2012 PNAS reanalysis by Fang, Steen, and Casadevall (cited above) found misconduct behind the majority — and the proportion varies significantly by journal and field.

Misconception: Disclosure eliminates conflicts of interest.
Disclosure is necessary but insufficient. Research published in JAMA Internal Medicine has shown that industry-sponsored clinical trials are more likely to report outcomes favorable to the sponsor even after disclosure requirements are met. Disclosure informs readers; it does not neutralize the underlying pressure.

Misconception: IRB approval means a study is ethical.
IRB approval means a study met minimum requirements for proceeding at the time of review, based on submitted protocols. It does not guarantee ethical conduct during the study or ethical treatment of results afterward.


Checklist or steps

The following represents the formal ethics compliance sequence for federally funded human subjects research in the United States, as established by 45 CFR Part 46 and NIH policy:

  1. Determine whether research involves human subjects under the regulatory definition — not all research on humans qualifies.
  2. Determine exempt, expedited, or full review category based on risk level and subject population.
  3. Submit IRB application including protocol, consent forms, recruitment materials, and data management plan.
  4. Obtain IRB approval before any participant contact or data collection begins.
  5. Register the trial or study in an appropriate public registry (ClinicalTrials.gov for clinical trials receiving FDA oversight).
  6. Conduct study per approved protocol — any changes require IRB amendment and re-approval.
  7. Submit continuing review at intervals specified by the IRB (commonly annually).
  8. Report adverse events or unanticipated problems to the IRB within required timeframes.
  9. Report results in compliance with applicable data sharing and results reporting requirements.
  10. Archive records for the period required by the funding agency and institutional policy (commonly 3 to 7 years post-study completion).
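The ordering constraints in the sequence above (approval before participant contact, amendment before deviation) are what institutional compliance systems actually enforce. A hypothetical sketch of such an ordered tracker — the step names and the class are illustrative, not any agency's actual system:

```python
# Hypothetical ordered-compliance tracker; step names are illustrative.
IRB_SEQUENCE = [
    "determine_human_subjects",
    "determine_review_category",
    "submit_irb_application",
    "obtain_irb_approval",
    "register_study",
    "conduct_study",
    "continuing_review",
    "report_results",
    "archive_records",
]

class ComplianceError(RuntimeError):
    """Raised when a step is attempted out of the required order."""

class StudyTracker:
    def __init__(self):
        self.completed = []

    def complete(self, step):
        # Each step may only be performed once every earlier step is done.
        expected = IRB_SEQUENCE[len(self.completed)]
        if step != expected:
            raise ComplianceError(f"cannot do {step!r} before {expected!r}")
        self.completed.append(step)

tracker = StudyTracker()
tracker.complete("determine_human_subjects")
try:
    tracker.complete("conduct_study")   # skipping IRB review and approval
except ComplianceError as e:
    print(e)   # cannot do 'conduct_study' before 'determine_review_category'
```

The design point is that sequencing is the compliance guarantee: an out-of-order step is rejected outright rather than logged and waved through.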

Reference table or matrix

Framework / Standard                      | Governing Body                     | Scope                                             | Key Document
Research misconduct definition (FFP)     | Office of Research Integrity (ORI) | Federally funded US research                      | 42 CFR Part 93
Human subjects protection                | DHHS / IRBs                        | US federally funded human research                | 45 CFR Part 46
Clinical trial ethics                    | World Medical Association          | International medical research                    | Declaration of Helsinki (2013)
Publication ethics                       | Committee on Publication Ethics    | Journals, editors, publishers                     | COPE Core Practices
Data sharing policy                      | National Institutes of Health      | NIH-funded research (2023+)                       | NIH Data Management and Sharing Policy
Conflict of interest management          | National Science Foundation        | NSF-funded research                               | NSF COI Policy
Responsible conduct of research training | ORI                                | Graduate students, postdocs, funded investigators | ORI RCR Guide

References