The accepted route for the dissemination of research findings is through their publication in peer-reviewed research articles. This is an excellent system in many regards – other researchers in the field are given an opportunity to assess whether the methods used were appropriate, the experiments well-controlled, the interpretation of the results consistent with the data, and whether the study truly adds to current understanding.
But for all its virtues, pre-publication peer review doesn’t tell us one simple thing: can the experiments be replicated? All too often, researchers from other labs have difficulty replicating published findings. Usually, this is due to subtle differences in methodology between labs. Less frequently, the published data may have arisen from an honest mistake; a malfunctioning piece of equipment, say, or a forgotten step during the statistical analysis. Sadly, there is a third possibility which, despite its rarity, gets all the attention: the spectre of fraud.
Peer review is not a safeguard against fraud, as exemplified by the appalling case of Diederik Stapel. Currently, one’s scientific productivity is gauged by publication output, and as such, there is enormous pressure to publish work in high-quality journals. One unfortunate consequence of this is that researchers (and editors) are reluctant to retract published work when errors (innocent or otherwise) come to light. The competitive ethos of ‘publish or perish’, combined with the absence of adequate checks and balances, has allowed the most corrupt and ambitious to commit fraudulent acts.
There are other problems with the current system. It can take many years between a novel discovery and its eventual publication, and the vast size of the literature defeats attempts to keep abreast of developments in divergent fields. Fear of being ‘scooped’ causes many scientists to become secretive about their findings, impeding progress.
I propose a radical solution: the complete abolition of research papers.
The nature of publication is counter to the very concept of science as a process rather than a product. An alternative approach might entail the submission of all novel research findings to online databases, with credit given to the original contributors – provided that other institutions are able to replicate the results. Hence, it would be in everyone’s interest to supply detailed methodology and to support other researchers in attempts to reproduce their work. Unless the work is shown to be reproducible, it would wither and die on the wiki-vine.
Clearly, this would not apply to all fields. Not everybody has access to a particle accelerator on the scale of the Large Hadron Collider, and many techniques are highly specialised. In these cases, the researchers would be expected to demonstrate their findings to colleagues within the field.
In this way, research would become a more collaborative effort, and fledgling lab heads would be better able to compete with their established peers. Journals would publish periodic review articles based on confirmed findings in the database, written by researchers within each field.
The advantages of such a system are considerable. Data would become an open resource, with authors more willing to present at conferences. Scientists would be rewarded for their continual research output rather than punctuated papers, raising morale, and data reliability would be assured. Finally, shorter author lists would make individual contributions clearer. We must recognise an antiquated system for what it is, and begin to implement the necessary changes.