You don't have to look far on the Internet to find people complaining about peer review and all of its frustrations and flaws. My peer review frustration of the day is when reviewers come back with "everyone knows this" or "this is the expected result" and don't back up the assertion with any reference to previous literature. If there is something that "everybody knows" AND it is actually backed up by experiment (and not something people just assume), then it should be easy to provide the authors with a reference to prior work. If the result is something people have assumed for years, but is not (yet) backed up by data in the literature, then the experiment is possibly worth publishing, assuming the methodology and analysis are sound. I find this situation annoying both when I am the author and when I am one of the other referees and the editor sends everyone the reports (a practice I wholeheartedly support--it is a good way to help calibrate reviews, and I often learn something from reading the opinions of people with complementary expertise).
I see this most frequently when the work is interdisciplinary, or when someone publishing is new to the field. New researchers don't have the biases and inherited wisdom of their predecessors, and are in a good position to question assumptions. They may also bring new tricks to an older problem, illuminating things that have been taken for granted. In my own reviews, I try to provide at least one reference when I comment that a result is not new, or is expected based on previous work. Yes, the authors should do a thorough literature search, but sometimes people miss things, and if a result really is widely known, it takes less than 10 minutes to pull up an appropriate reference.
In my own research, I find that pretty much whenever I move into a new area, there are things that "everybody knows" that follow most people's scientific intuition, but are completely unsupported experimentally. Sometimes those things are trivial, and no one really cares; other times they are foundational to interpreting results or designing experiments. Probably 9 times out of 10, the results of an experimental test will mostly align with the expectation. But the real fun comes when the results are completely unexpected, and that is why we do the work. Confirming or refuting a hypothesis is what research should be all about. As a reviewer, call that 9-times-out-of-10 result incremental if you want to (which often it is), but don't say it isn't a new result, even if it is something "everybody knows". Now we have experimental confirmation that "everybody knows" something that is actually correct.