Data doesn't lie. Humans do.
Three typical cases showing how a poorly designed project can distort the truth.
A few weeks ago, I watched the documentary Bad Surgeon: Love Under the Knife (2023) on Netflix. The film explores the case of Paolo Macchiarini, once celebrated as a visionary surgeon and medical pioneer. Over time, however, his reputation collapsed as evidence emerged that he had manipulated research data and performed high-risk surgeries without adequate scientific validation.
Part of the problem was that errors in his research were hidden. The conclusions he reported did not reflect the reality of the evidence because the necessary trials had not been conducted rigorously. Yet those findings were used to justify a series of experimental procedures on patients, leading to tragic outcomes.
This story is not only about one individual. It is an extreme case, but it illustrates a fundamental issue in science:
Data does not speak for itself. The quality of truth depends on how we design, collect, analyze, and interpret data.
In academic research, especially in the social sciences and education, data distortions rarely appear as outright misconduct like that portrayed in the Netflix documentary. More often, they emerge as small methodological biases that accumulate over time. Yet even small distortions can lead to misleading conclusions, and sometimes to serious consequences.
The common misconception: “Numbers don’t lie”
The phrase sounds convincing. Methodologically, however, it is misleading.
Data itself does not lie. But research processes can produce biased data.
In research practice, distortions often emerge across three layers:
Research design
Analytical procedures
Interpretation of findings
A study can be ethically conducted and still produce misleading conclusions if one of these layers is flawed.
1. Weak research design: When the question is wrong, the data will be too
Research design determines whether data genuinely reflects the phenomenon under investigation. A poorly designed project can misdirect a study from the outset.
Consider a hypothetical example: a university surveys students about their satisfaction with an academic program. An online questionnaire is distributed, and 2,000 responses are collected. The results show that 85% of students report being satisfied.
Does this mean the program is successful? Not necessarily. Several methodological issues may be present:
Sampling bias.
Students who are dissatisfied may simply ignore the survey. They may feel disengaged or believe their feedback will not matter. Those who are satisfied may be more willing to respond. (A short simulation after this list shows how this alone can produce the 85% headline.)
Instrument bias.
If the questionnaire only asks questions such as “What do you like most about the program?” without including open-ended questions about concerns or dissatisfaction, the data will naturally skew toward positive responses.
Over-interpretation.
Students may report being “moderately satisfied,” yet still believe improvements are necessary. If leadership sees only the headline figure—85% satisfaction—important problems may be overlooked.
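To make the sampling-bias point concrete, here is a minimal simulation in Python. Every number in it is hypothetical: a true satisfaction rate of 60%, with satisfied students assumed to respond at 30% and dissatisfied students at only 8%. Nonresponse alone is enough to manufacture the 85% headline:

```python
import numpy as np

# Hypothetical illustration of nonresponse bias; not real survey data.
# Assume 10,000 enrolled students, of whom 60% are genuinely satisfied.
rng = np.random.default_rng(42)
satisfied = rng.random(10_000) < 0.60

# Assumed response rates (made up for this sketch): satisfied students
# answer at 30%, dissatisfied students at only 8%.
response_prob = np.where(satisfied, 0.30, 0.08)
responded = rng.random(10_000) < response_prob

print(f"True satisfaction rate:      {satisfied.mean():.0%}")             # ~60%
print(f"Rate among respondents only: {satisfied[responded].mean():.0%}")  # ~85%
```

Every individual answer in this scenario is honest; the distortion enters entirely through who chooses to answer.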
Before collecting data, researchers must ask themselves:
What exactly am I measuring?
How am I measuring it?
Which voices might be missing from this dataset?
2. Confirmation bias: Seeing only what we expect to see
One of the most pervasive cognitive traps in research is confirmation bias: the tendency to seek evidence that supports our assumptions while overlooking contradictory findings.
Imagine a study investigating whether meditation improves workplace productivity. A researcher tracks employees who practice meditation for eight weeks and observes a 10% increase in productivity. The study is published with the headline:
“Meditation improves workplace productivity.”
However, the design may contain serious weaknesses:
No control group.
Without comparing the meditation group to employees who did not meditate, we cannot determine whether meditation was the actual cause of the improvement.
Ignoring conflicting data.
Some participants may have experienced stress or reduced productivity due to the added discipline of daily meditation. If those results are excluded or overlooked, the conclusion becomes misleading.
Selective analysis.
Researchers may consciously or unconsciously choose statistical procedures that produce favorable results rather than examining the full range of analytical possibilities.
Methodologically, this is an issue of internal validity. Strategies to reduce confirmation bias include:
Incorporating control groups into research design (see the sketch after this list)
Pre-registering hypotheses when possible
Reporting all results, including those that contradict expectations
Working collaboratively so that colleagues can challenge assumptions
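To see why the first of these strategies matters so much, here is a minimal sketch, again with invented numbers. Suppose company-wide productivity drifts up about 10% over the eight weeks for reasons unrelated to meditation, such as new tooling or seasonal workload. A pre/post comparison of meditators alone reproduces the headline finding; adding a control group reveals that the gain is shared by everyone:

```python
import numpy as np

# Hypothetical illustration of why a control group matters; no real data.
rng = np.random.default_rng(7)

# Row 0: meditators, row 1: controls. Baseline productivity ~ N(100, 10).
baseline = rng.normal(100, 10, size=(2, 200))

# Assume a 10% company-wide drift unrelated to meditation, plus noise.
secular_trend = 1.10
followup = baseline * secular_trend + rng.normal(0, 5, size=(2, 200))

med_gain = (followup[0] - baseline[0]).mean()
ctl_gain = (followup[1] - baseline[1]).mean()

print(f"Meditation group gain: {med_gain:+.1f}")                  # ~ +10
print(f"Control group gain:    {ctl_gain:+.1f}")                  # ~ +10 as well
print(f"Estimated treatment effect: {med_gain - ctl_gain:+.1f}")  # ~ 0
```

The point is not that meditation does nothing; it is that a single-group pre/post design cannot distinguish a treatment effect from a background trend.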
3. P-hacking: When data is “manipulated” until results look significant
Another problem increasingly discussed in research methodology is p-hacking. This occurs when researchers repeatedly test different analytical approaches until they obtain a statistically significant result, typically p < .05. The outcome appears statistically valid, but it may simply reflect random variation.
P-hacking is especially likely when:
A dataset contains many variables
Multiple analytical methods are possible
No pre-specified analysis plan exists
Consider another hypothetical study examining whether classical music improves memory performance.
Initially, the analysis finds no significant difference between participants who listened to classical music and those who did not. Instead of reporting this null finding, the researcher begins experimenting with alternative analyses:
Dividing participants into smaller subgroups (for example, only men, or only participants over 30)
Switching statistical methods (from t-tests to ANOVA to regression models)
Removing outliers until significance appears
Eventually, the researcher discovers that “men over 30 who listen to classical music show 15% better memory performance.”
The result looks compelling. But when enough analyses are attempted, statistically significant patterns will inevitably appear by chance. Not every result with p < .05 reflects a meaningful phenomenon.
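A short simulation makes clear how little effort this takes. The data below are pure noise, with no music effect built in anywhere; across twenty arbitrary subgroup t-tests at α = .05, the chance of at least one false positive is about 1 - 0.95^20, roughly 64%, so a “discovery” like the one above is the expected outcome rather than a surprise:

```python
import numpy as np
from scipy import stats

# Hypothetical illustration of p-hacking: the data contain no real effect.
rng = np.random.default_rng(0)
n = 400
memory = rng.normal(50, 10, n)               # memory scores, noise only
music = rng.integers(0, 2, n).astype(bool)   # random "listened to classical"
age = rng.integers(18, 65, n)
sex = rng.integers(0, 2, n)                  # 0 = women, 1 = men

# Run 20 arbitrary subgroup t-tests (10 age windows x 2 sexes)
# and keep whichever come out "significant".
hits = []
for lo in range(18, 58, 4):
    for s in (0, 1):
        mask = (age >= lo) & (age < lo + 4) & (sex == s)
        a, b = memory[mask & music], memory[mask & ~music]
        if len(a) > 5 and len(b) > 5:
            _, p = stats.ttest_ind(a, b)
            if p < 0.05:
                hits.append((lo, s, round(float(p), 3)))

print("Subgroups with p < .05 on pure noise:", hits)
# With 20 tests at alpha = .05, the chance of at least one false
# positive is about 1 - 0.95**20, i.e. roughly 64%.
print(f"Chance of at least one spurious hit: {1 - 0.95**20:.0%}")
```

This is exactly what pre-specified analysis plans and corrections for multiple comparisons are designed to prevent.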
Research integrity is not only an ethical issue
Research integrity is often framed as an issue of personal ethics. In reality, it is also a matter of methodological systems and research processes.
A rigorous study requires:
Clear research design
Transparent procedures
Honest analytical practices
Careful interpretation of results
Ethics alone does not guarantee reliable knowledge. Sound methodology does. And ultimately, responsible research requires both.