This is for either
A) a more biological paper, where software is a "means to an end"
B) a software paper where the software itself is the focus of the paper.
EDIT:
This is assuming that the code is made available to the reviewer.
A brief Google search reveals that code quality mostly means a combination of readability, robustness, extensibility and maintainability. It has less to do with efficiency. In that case (and if you define code quality this way, too), code quality is frequently subjective, and I would be cautious about commenting on it. Even if the code is obviously in bad shape (e.g. full of global variables), I would not judge the quality of the manuscript by the quality of the code. For a scientific program, the underlying algorithm is far more important. Code quality is more of an engineering concern and is only a "good-to-have". We can write bad-looking but efficient programs, and I have indeed seen popular programs with bad code quality (by my standards); we may argue they are not easy to read or use, but we cannot find good alternatives. In addition, many tools are published not for others to use, either intentionally or in effect. In that case, code quality is not important either.
Code quality is like tidying up your home. Some people can live in total chaos; some people cannot tolerate a speck of dust. Whenever I look at code I wrote three months ago I say "I cannot believe I wrote such crappy code!", but I understand that most people don't even care, especially in science, where most code is written for and by a single user. The real solution would be to have a simple "code quality metric".
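Just to illustrate what such a metric might look like (a purely hypothetical sketch, not an established measure), one could count crude proxies such as overly long functions and module-level assignments using Python's standard-library ast module:

```python
# Hypothetical "code quality metric": flag long functions and count
# module-level assignments (a rough proxy for global variables).
import ast
import sys

def crude_quality_report(path, max_func_lines=50):
    with open(path) as fh:
        tree = ast.parse(fh.read(), filename=path)

    long_funcs = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_func_lines:
                long_funcs.append((node.name, length))

    module_globals = sum(isinstance(n, ast.Assign) for n in tree.body)
    return {"long_functions": long_funcs, "module_globals": module_globals}

if __name__ == "__main__":
    print(crude_quality_report(sys.argv[1]))
```

Of course, anything like this is only a proxy; it says nothing about whether the underlying algorithm is correct.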
To be honest, no. I'm not a computer scientist, so "code quality" to me is all about usability. I will comment on:
If the substance of what the code produces is scientifically rigorous and has utility, then its style, readability, etc. are not that important to me. Furthermore, as long as the source code is available, post-publication use, critique and extension will allow others to judge whether the code is good or not.
For (1), I would ask the authors to make the source available as a supplementary file or, better, to publish it on GitHub/SourceForge/etc... A workflow could be posted on myexperiment.org.
For (2), I would suggest having a look at the BMC "Open Research Computation" pre-submission guide for software article authors: http://www.openresearchcomputation.com/authors/presubmissionguide
Is the software source code available on a public repository? (Please provide the URL for the public repository.)
Is the source code made available under an Open Source Initiative compliant license? (A list of OSI-compliant licences is available at: http://www.opensource.org/licenses/category)
Are project authors and contributors clearly defined, ideally through a Description of a Project (DOAP) document [http://en.wikipedia.org/wiki/DOAP, http://trac.usefulinc.com/doap]? (We recommend the use of an automatic DOAP generator, such as those linked here: http://trac.usefulinc.com/doap/wiki/Generators)
Documentation: source code documentation, as well as instructions for use, are expected to be of a high standard.
etc...
It is subjective, but there are things that are pretty widely recognized as poor practice. If I see a 30-line block of code with "A" hard-coded, then the same 30-line block for "G", then the same 30-line blocks for "T" and "C" (and don't forget lower-case)... well, the code may work fine, but is that good code quality?
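A toy sketch of the kind of duplication being described (a hypothetical example, not from any actual submission): the same logic copy-pasted once per base, versus doing the job once with a parameter.

```python
# Hypothetical illustration of the anti-pattern described above.

# Poor practice: the same logic duplicated with a hard-coded base...
def count_A(seq):
    total = 0
    for ch in seq:
        if ch == "A" or ch == "a":
            total += 1
    return total

def count_G(seq):  # ...and copy-pasted again for "G", "C", "T", ...
    total = 0
    for ch in seq:
        if ch == "G" or ch == "g":
            total += 1
    return total

# The code above "works fine", but the same job can be done once:
def count_base(seq, base):
    return sum(1 for ch in seq.upper() if ch == base.upper())

print(count_base("ACgtAAgc", "A"))  # 3
```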
Reproducibility is one of the most important and fundamental components of science. The 'quality' of experimental tools are not. It is much more important to encourage that code necessary to reproduce an analysis be submitted than it is to require that code meet a certain standard of quality.
As a reviewer, do you consider the quality of the materials used in an experiment, assuming that different materials have been demonstrated to produce equivalent results? If so, this would provide an unnecessary barrier to science. (Does it matter if a spectrometer cuvette is made of plastic vs. glass? That a microscope was made by Leica vs brand X? That Galileo used a primitive telescope?)
The philosophy that "if your code is good enough to do the job, it is good enough to publish" is outlined in Nick Barnes's 2010 Nature column 'Publish your computer code: it is good enough'.
For publishing code intended for use as software, it is appropriate to comment on the functionality of the code, but not its 'smell'. If it is published as open source, it is available for others to improve upon. Important advances can be made in a fraction of the time it would take to produce high-quality code, and many researchers do not have the time that would be required to clean up 'good enough' code.
Good points. I agree that things like the brand of scope or the cuvette material are not important. But to continue the analogy, if the authors invented their own spectrophotometer or built their own microscope to generate their results, I'd want to evaluate the quality of what they created. For code, this could just be unit tests -- it doesn't have to look pretty, but it needs to work. In other words, functional quality over aesthetic quality (or the other kinds of quality described in @lh3's answer).
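As a sketch of what "functional quality over aesthetic quality" could mean in practice (the function and tests below are hypothetical, just to illustrate the idea), a handful of unit tests asserting known outputs is often enough:

```python
# Hypothetical example: the tests matter more than how pretty the code is.
import unittest

def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        return 0.0
    return (seq.count("G") + seq.count("C")) / len(seq)

class TestGCContent(unittest.TestCase):
    def test_known_values(self):
        self.assertAlmostEqual(gc_content("GGCC"), 1.0)
        self.assertAlmostEqual(gc_content("ATAT"), 0.0)
        self.assertAlmostEqual(gc_content("ACGT"), 0.5)

    def test_empty_sequence(self):
        self.assertEqual(gc_content(""), 0.0)

if __name__ == "__main__":
    unittest.main()
```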
Interesting question. I usually take quality to mean efficiency and success in doing what the software was designed to do.
I never write about details regarding code quality as a reviewer, just as I don't question whether an animal study using 20 cages (5 control + 5 treatment 1; 5 control + 5 treatment 2) was run concurrently or successively. This is for paper type A, biological and using software as a tool. The exception to this would be a general statement that the methods used and experiments conducted are well suited to the questions of X that the researchers proposed to address, etc.
A known tool (e.g. BLAST, BOWTIE, GenePatterns) generally need not be assessed in terms of efficiency and success, unless it was applied for the wrong purpose. A new tool may be difficult to assess, say if the code is not submitted or made available to the reviewers.
As a reviewer, do you talk about the quality of writing in the Methods section of a paper? I think you should. Supporting code is no different. If it's unreadable, the method isn't documented.
Depends on the software. If the software is a use-me-as-I-am application, code quality does not matter that much. If the focus of the software is that you can extend it or use parts of it elsewhere, code quality becomes a lot more important. Imagine environments like Cytoscape or Galaxy that are just horrible to extend with your individual solutions and problems because the code is a messy blob of characters. What would you choose: clearly defined and well-designed interfaces, or a messy blob? In the review of such a paper, code quality should be considered.
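A minimal sketch of the kind of "clearly defined interface" that makes extension pleasant (hypothetical names, not the actual plugin API of Cytoscape or Galaxy):

```python
# Hypothetical plugin-style interface: extensions only need to implement run().
from abc import ABC, abstractmethod

class AnalysisPlugin(ABC):
    """Contract that every analysis extension must fulfil."""

    @abstractmethod
    def run(self, data):
        """Take input data, return a result."""

class MeanCoverage(AnalysisPlugin):
    def run(self, data):
        return sum(data) / len(data) if data else 0.0

def run_all(plugins, data):
    # The host application depends only on the interface,
    # never on any plugin's internals.
    return {type(p).__name__: p.run(data) for p in plugins}

print(run_all([MeanCoverage()], [10, 20, 30]))  # {'MeanCoverage': 20.0}
```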
Most editors would never reject a paper based on an accusation of bad code. I mean, sure, as a reviewer you can talk about it, but that's probably the only influence you can have. You hope that the authors will listen, go back and improve. They probably never will.
Are your experiments and experimental conditions ever a "means to an end"?
Your question seems to assume that code quality is not important when the software is just a means to an end... but that's just accidental, right?
I don't intend to imply that. I just guess that there may be different responses depending on those two cases.
@mndoci, I trust your comment is meant to be rhetorical and to play up the value of software in science, but clearly the answer to your question is "yes": experiments are done with a goal in mind; they are not an end in themselves. See the definition of an experiment in any dictionary: