Unfortunately, the scientific community is not immune to hype and buzzwords either. About 10 years ago, enhancers and cis-regulatory elements were the talk of the town, so many transcriptomic studies were published that focused on those in particular. Then it was all about alternative splicing, long non-coding RNAs, polymerase pausing, etc. After that, everyone tried to include some, often far-fetched, relevance to SARS-CoV-2 in their papers, and in recent years, of course, anything with AI, whether for classification, detection, segmentation, etc., is sexy.
A reanalysis of published microarray data per se is not really intriguing for editors in terms of methodology - it is basically a 20-year-old technology. NGS-based results overshadow microarrays nowadays (unless you really need large sample numbers), partly because the selection of probes already limits what you will be able to find.
If you have not generated the data yourself, you are also not the first to analyze it. So either your scope and hypothesis must differ significantly from the existing published results, or your bioinformatic methodology must be novel. Without knowing any details about your manuscript, it seems to me that you applied mostly existing methods. This probably makes the editors doubt that your study is sufficiently informative for the readers to be of interest.
Of utmost importance are of course the conclusions that you reached. If you discovered a truly novel disease mechanism, that may be really exciting, but would presumably need underpinning by wet lab experiments to get accepted. If you confirm existing knowledge and find those pathways enriched that are already known to be implicated in that disease, your study lacks novelty.
So start with the question: "What is the exciting finding that I want to tell other researchers about?" and then start rewriting your manuscript so that everything underpinning that finding is nicely presented, but also point out where further research is needed. Add additional datasets to explore your finding further, e.g. in related diseases. Discuss how that finding adds to the existing knowledge and whether it is something that is potentially "druggable" and could help with treatment in the future. But even if it only simplifies diagnosis or the management of the disease, it could already represent a significant contribution worth a publication. Good luck!
It is just one opinion about your work, and apparently the editor formed it without even commissioning reviews. I have read the dreaded words
It is not clear
in many reviews, enough so to learn that it only sometimes means something is actually wrong with my work. Instead, it often means that the reviewers are not excited about the work, and the easiest way to express that lack of enthusiasm is to say that the motivation, execution, or interpretation "is not clear". Then I re-submit to a different journal or a different agency and get strong reviews.

Even though I am an experimental biologist, you will not hear from me that you need to add wet-lab experiments to your work. Of course it wouldn't hurt if you did, but people publish purely bioinformatics papers all the time. I think you need to make your presentation of the results more exciting. That could mean: 1) highlighting your main finding; 2) finding a better selling point; 3) and yes, adding experiments.
My general advice is not to overthink generic and unhelpful feedback such as what you received, and certainly not to be brought down by it. Repackage the product and try again.
By doing the laboratory work.
Bioinformatics, as the name implies, is about understanding biology.
The computational methods we use make predictions about the system we're working on.
You have to show that your predictions hold in vitro or in vivo.
If you're not experienced with lab work, try finding collaborators who have the labs and infrastructure.
I suggest that you don't restrict yourself to data analysis alone; learn wet-lab techniques and biology as well.
Knowing the biology of what you're studying and how these data are generated will give you a better understanding of, and insight into, your analyses and results.
Hard disagree.
I don't need to do a clinical trial or get preclinical patient-derived xenograft models for the GWAS+eQTL study that took me years to complete.
A lot of novel biology can be discovered and validated from computational methods performed on the wealth of existing datasets out there. The "laboratory work" has already been done; it has generated those datasets.
Computational analysis of data is not "predictions". If I reliably see a transcript detected in my sequencing reads, that's not a prediction. That's my data.
I agree with this comment. To generate such data, an experiment has already been done. Perhaps the result could be reinforced with a quantitative analysis such as qPCR. But I also think, as colleagues have said, that if we want to highlight a "finding x" and we can validate it coherently against other datasets (generated from other experiments), that is no reason not to consider it a complete study. Right?
Validation is always good, but it's typically a supplementary figure to support the finding. It does not really add to the story other than showing that it's robust. The critical part is to work out what the differential genes mean in terms of explaining the phenotype.
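To make this kind of in-silico validation concrete, here is a minimal sketch of checking whether differential-expression results replicate in an independent dataset by correlating log2 fold-changes and sign concordance. All gene names and values are hypothetical placeholders; in practice you would load the statistics produced by your differential-expression tool for each cohort.

```python
# Minimal sketch: cross-dataset robustness check for differential expression.
# Gene names and fold-change values below are hypothetical illustrations.
import math

# log2 fold-changes for the same genes in a discovery and a validation dataset
discovery = {"GENE_A": 2.1, "GENE_B": -1.4, "GENE_C": 0.9, "GENE_D": -2.3, "GENE_E": 1.7}
validation = {"GENE_A": 1.8, "GENE_B": -1.1, "GENE_C": 0.4, "GENE_D": -1.9, "GENE_E": 1.2}

# Restrict to genes measured in both datasets
shared = sorted(set(discovery) & set(validation))
x = [discovery[g] for g in shared]
y = [validation[g] for g in shared]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (sa * sb)

r = pearson(x, y)
# Fraction of genes whose fold-change direction agrees across datasets
same_sign = sum(1 for ai, bi in zip(x, y) if ai * bi > 0) / len(shared)
print(f"Pearson r of log2FC: {r:.2f}; sign concordance: {same_sign:.0%}")
```

A high correlation and sign concordance across independent cohorts is exactly the kind of supplementary robustness evidence discussed above; it supports the finding without requiring new wet-lab work.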
Hi everyone!
Thank you for all comments. You have helped me a lot and broadened my perspective. I am currently starting my thesis and I am learning something new every day, so all your suggestions and experience are welcome. I have noted down all your tips to apply them.
The disease of the study is Alzheimer's disease. My group specialises in ALS and AD.
It's a funny thing about journals: I've had one publisher really like this type of study while others considered it preliminary. That makes me wonder whether some of them just want to publish in volume; maybe they are not as interested in the quality of the study as others are. It's a bit subjective and we depend on luck, don't we?