9.4 years ago
Biomonika (Noolean)
3.2k
I am currently re-reviewing a paper, and I now consider the manuscript to be in perfect shape. However, I still cannot run the whole pipeline, although the authors have already significantly improved the README and provided test files. The paper has merit even without the pipeline/code, but I am in doubt about what to do here. It should be noted that all the code is published on GitHub, so technically it is possible to troubleshoot it.
Thanks for sharing your experience.
Is the paper about the software itself, or is the software just support to re-create their analysis, with the main point of the paper being something else?
What are the policies of the journal? Are they strict about requiring working code, or is it up to the authors to decide?
Good point. If the paper is about a finding, then reproducing their analyses is less important (if you trust the explanation of methods). If the paper is advertising a tool then it very much has to work.
I have reviewed papers like that, and in all cases I asked the authors to fix their code before giving my approval. If the paper is about the software, it should run at least a minimal test. If the paper is about some biological insight based on the use of the software, the software should be open-sourced and runnable for reproducible results.
Is it due to dependency issues or errors in the script?
If I use the exact commands that were provided, I get only the help page and no errors (and hence no output). However, my question is rather philosophical :-)
Can the authors provide input data and a makefile-like process that shows the data processing and analysis in a setting that reproduces what is in the paper? If you can't reproduce the results, that would seem problematic.
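A makefile-like driver does not need to be complicated; even a short script in the following shape would help reviewers. The commands and file names below are trivial stand-ins for illustration only, not the authors' actual tools:

```shell
#!/usr/bin/env bash
# Sketch of a makefile-like driver: each step consumes the previous
# step's output, and the script aborts on the first failure so a
# reviewer sees exactly where things break.
set -euo pipefail

printf 'sampleA 10\nsampleB 3\nsampleC 7\n' > raw.txt   # stand-in input
awk '$2 >= 5' raw.txt > filtered.txt                    # step 1: filter
sort -k2,2nr filtered.txt > ranked.txt                  # step 2: rank
head -n 1 ranked.txt > top_hit.txt                      # step 3: report
cat top_hit.txt
```

With `set -euo pipefail`, a reviewer who runs this gets either the final output or a clear failure at a specific step, which is exactly the property a reproducibility script needs.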
If it's not obvious how to run the pipeline, that is on the authors. I would suggest that a major revision is necessary until it is obvious. They are shooting themselves in the foot by not providing a working example script.
You are doing the right thing. Too many reviewers fail to even check that software exists or is runnable.
There are different ways software can be incomplete: perhaps some inputs are not recognized, parameters are hard-coded, or some functionality is not quite ready.
For me, what marks software as unacceptable is when the program does not work with its own test and example datasets. If someone can't get that right, I have little trust that they did the right thing in a more complex scenario.
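The minimal check described above can be a few lines of shell: run the tool on its example data and fail loudly if it exits non-zero or produces empty output. Here `sort` stands in for the authors' (hypothetical) tool, and the file names are placeholders:

```shell
#!/usr/bin/env bash
# Smoke-test sketch: run the tool on its bundled example input and
# assert that it exits cleanly and writes non-empty output.
set -euo pipefail

printf '3\n1\n2\n' > example_input.txt          # stand-in example dataset
sort -n example_input.txt > example_output.txt  # stand-in for the tool
test -s example_output.txt && echo "example dataset ran OK"
```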
For now: ask for an SSH connection, migrate their input to a new directory, and run the pipeline step by step.
They really need to include an accurate list of dependencies (OS, required libraries), and an automated build system is always nice. For example, in Python you can use virtualenv and pip to easily create a completely isolated development/testing environment.
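As a sketch of that workflow (the post mentions virtualenv; the stdlib `venv` module shown here behaves the same way, and `reviewer_env` is just a placeholder name):

```shell
# Create a throwaway, isolated Python environment for testing the tool.
python3 -m venv reviewer_env

# Activate it so that `python` and `pip` below refer to the isolated copies.
. reviewer_env/bin/activate

# After installing the tool's dependencies with pip, record the exact
# versions so other reviewers can recreate the identical environment:
python -m pip freeze > requirements.txt

deactivate
```

Shipping that `requirements.txt` alongside the README lets anyone rebuild the environment with `pip install -r requirements.txt`, which addresses the dependency-listing point above.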