It made me laugh when I read the MIQE guidelines not long after this comic came out:
The nomenclature describing the fractional PCR cycle used for quantification is inconsistent, with threshold cycle (Ct), crossing point (Cp), and take-off point (TOP) currently used in the literature ... we propose the use of quantification cycle (Cq)
Not only is the BED format 0-based, it's also "half-open", meaning the start position is inclusive but the end position is not.
So if your region starts at position 100 and ends at 101 in standard 1-based coordinates with both start and end inclusive (i.e. it's two bases long), then when you convert it to 0-based half-open coordinates for BED format the region now starts at 99 but still ends at 101!
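A minimal sketch of the conversion in Python (using the example coordinates above), for anyone who wants to sanity-check their own intervals:

    # Convert a 1-based, fully-closed interval (the way biologists usually write it)
    # to the 0-based, half-open convention used by BED.
    def to_bed(start_1based, end_1based):
        return start_1based - 1, end_1based

    # The two-base region 100-101 (1-based, inclusive) becomes 99-101 in BED,
    # and a nice side effect is that the length is simply end - start.
    bed_start, bed_end = to_bed(100, 101)
    assert (bed_start, bed_end) == (99, 101)
    assert bed_end - bed_start == 2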
Yeah, the people who conceived of that were utterly brilliant usability specialists.
Yes, that IS an interesting feature when, as a bioinformatician, you're working with C arrays and you want to define an empty interval, an insertion point, etc.
Storing gene annotation in an Excel file and finding out that some HUGO gene names have been mangled by Excel: SEPT9 becomes Sept-9. Conclusion: do not use the .xls format to store your data.
Listening to people make this eternal mistake: "Hey, these two sequences are 50% homologous" (homology is all-or-nothing; what they mean is percent identity).
This is a popular one; DEC1 is another well-known example. You can actually tell Excel not to apply that auto-correction. Since you most often get the data from a biologist who may already have handled it in Excel, it is better to use a separate ID column rather than the gene name column if one is available (you often receive both anyway). These errors can even occur in databases that you download data from or that are used for annotation, so it is good to check.
I feel like a lot of "stupid mistakes" revolve around betrayed trust and false assumptions
For example:
Trusting that a downloaded file is actually fully downloaded
Trusting that an aligner will accept a list of query files instead of just taking the first and ignoring the rest (quiz: which ones am I talking about?)
Assuming that the quality scores in a FASTQ file are from a great Sanger-encoded run instead of a very poor Illumina-1.3 run
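For that last point, here's a rough sanity check I'd use (a sketch only; the offset ranges overlap, so a file whose scores all sit in the overlap can still fool you):

    # Guess the FASTQ quality offset from the range of quality characters.
    # Phred+33 (Sanger / modern Illumina) uses ASCII 33 up to about 74;
    # Phred+64 (Illumina 1.3+ / Solexa) never goes below ';' (ASCII 59),
    # so any character below that rules out Phred+64.
    def guess_offset(quality_strings):
        codes = [ord(c) for q in quality_strings for c in q]
        if min(codes) < 59:
            return 33
        if max(codes) > 74:
            return 64
        return None  # ambiguous: everything falls in the overlapping range

    print(guess_offset(["IIIIHHGG##", "IIIIIIII!!"]))  # 33 ('#' and '!' are low ASCII)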
Re #2: I couldn't make Trimmomatic take all the adapters I gave it for removal from my FASTQ files. I then switched to bbduk (great tool!) to remove my adapters; it never skipped anything in my list.
If you forgive an attempt to be somewhat provocative, my two favorite mistakes are:
Letting academics build software
Academics need to publish papers, and one easy way to do that is to implement an algorithm, demonstrate that it works (more or less), and type it up in a manuscript. BT,DT. But robust and useful software requires a bit more than that, as evidenced by the sad state of affairs in typical bioinformatics software (I think I've managed to crash every de novo assembler I've tried, for instance, not to mention countless hours spent trying - often in vain - to get software to compile and run). Unfortunately, you don't get a lot of academic credit for improved installation procedures, testing, software manuals, or, especially, debugging of complicated errors. Much better and more productive to move on to the next publishable implementation.
Letting academics build infrastructure
Same argument as above, really. Academics are eager to apply for funding to build research infrastructure, but of course they aren't all that interested in doing the old and boring stuff. So although today's needs might be satisfied by a $300 FTP server, they will usually start conjecturing about tomorrow's needs instead, and embark on ambitious, blue-sky stuff that might result in papers, but not in actually useful tools. And even if you get a useful database or web application up and running (and published), there is little incentive to update or improve it, and it is usually left to bitrot while the authors go off in search of the next publication.
Yeah, I don't know why it is so hard for me to remember all the great bioinformatics software that has come from industry, like, uhh, Eland, or the great standards that have come from industry, like Phred-64 FASTQ.
To be clear, it's not a problem with academics themselves (after all, I'm one), just that the incentives are all wrong...
I am fine with point 2, but I have to disagree with point 1. Your de novo assembler example is actually not a good one. De novo assembly is very complicated and highly data dependent. I doubt any assembler works on every data set, no matter whether it was developed in academia or by professional programmers.
Out of the (relatively few) tools I have experience with, bowtie/tophat/cufflinks and also fastqc are the exceptions in terms of documentation, UI, maintenance, non-brittleness.
I always wonder if they ever check the program/code that comes with a paper. In one paper, they hardcoded the input file path in the code, which made me waste a whole afternoon figuring out what the hell was wrong with it.
@Jeremy: I'm not so sure industry is much better, and it's possible that academic development is like democracy: the worst form, except for all the others. Also, a lot of industry software is add-ons, designed to sell something else. FWIW, Newbler seems to be one of the better assemblers out there, and CLC is at least half-decent as an analysis platform for non-informaticians.
What about citations of your paper describing the software? The more people are able to use your software, the more cited it will be. That sounds like an incentive to me, and I try to see it that way when I write code (but I don't publish much code, it's mostly quite ad-hoc stuff).
Ketil repeatedly says in the post that academics have no incentive to improve their tools, as it's more beneficial to move on to other publications. I say the incentive is citations of your method. Not to mention that I regularly see academic software where they care about the things he says academics don't care about. So why would somebody say there are no incentives for improvements, when I can clearly see them and point them out to you?
You do have a point. However, citations are quite rare. One common problem with bioinformatics tools is that people use them but seldom cite them properly. Imagine something like bioawk - it is in day-to-day use by quite a few people, but it's not seen as something cite-worthy. Good tools have this habit of melding into the background, so they don't stand out. Journals are changing this practice now, though, with initiatives like STAR Methods.
The problem with citing bioawk is that there is no publication to cite. Other tools may not have that problem. But it is true that even I fail to cite some packages I use, usually those that are not required to replicate a result but are rather just a convenience.
STAR methods looks like a good initiative, too bad I will never send anything to Cell Press, on principle. They care about money above everything else - they are the only publisher that asked me to pay for a COVID-19 paper, at the peak of the pandemic no less.
IMHO being off by one is the emperor of all bioinformatics mistakes - it rules them all - and probably causes tens of millions of dollars in wasted effort
Re-inventing the wheel. So often have I had to debug (or just replace) a bad implementation of a FASTA parser when BioPython/BioPerl have perfectly good ones; I don't understand why no one bothers to use them. 10 minutes on Google can save you 2 days of work and save other people a week of work (you save 2 days of programming; they save a week of understanding your program to find the bug).
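For instance, rather than hand-rolling yet another parser, something like this does the job (Biopython's SeqIO; the filename is made up):

    from Bio import SeqIO  # pip install biopython

    # Iterate over records in a (hypothetical) FASTA file without slurping it
    # all into memory; SeqIO copes with wrapped lines, headers, empty records...
    for record in SeqIO.parse("contigs.fasta", "fasta"):
        print(record.id, len(record.seq))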
As the nim library docs say, "The great thing about re-inventing the wheel is that you can get a round one."
My main reason for reinventing the wheel is that I want to use a much more powerful and general language: Python instead of R. Of course, if the stuff I needed were already in Python/Pandas it would be a different thing entirely.
I fully agree, re-inventing the wheel is so tempting. We are way too eager to write a few lines of code each time. Plus, because you may have convinced yourself that you can knock the code out in 15 minutes, you don't bother writing any documentation. In short, there is a very strong tendency to re-invent the wheel... many, many times!
Using other people's code is all fun and games, until you realise that your package has 106 dependencies and you only use one function from each. Each of those dependencies has its own dependencies, depends on a particular version of gcc (but not the same one as the others), and doesn't play nice with some common setup used on other people's systems...
Running a batch BLAST job and forgetting the -o something.out option, then switching off the monitor and coming back the next day to a terminal full of characters.
tar -zxvf without checking the tar file first: I have extracted thousands of files into my current directory, assuming they came in their own folder.
I gave my Amazon EC2 password to someone in my group who wanted to run something quickly (estimated cost: $2). I received the bill 2 months later: $156. This person forgot to shut down the instance. That was 8 months ago and I'm still waiting for my reimbursement... Conclusion: don't trust colleagues!
I'll offer this one, which is a bit on the general side: deletion of data that appear to have no relevance from the computational side, but which are important to the biology/biologist. Often this arises from a lack of clear communication between the two individuals/teams as to what everything means, what exactly it means, and why it is relevant to the process being developed.
Having manual components in an analysis pipeline (editing data sets by hand, running scripts manually).
Not dealing with error conditions at all. This is one thing that I really noticed when I started with bioinformatics; code that would just merrily continue when it hit incorrect data and output gibberish or fail far away from the bad data. A debugging nightmare.
Not testing edge and corner cases for input data
Assuming that your input data are sane; I've run into all sorts of inconsistency issues with public data sets (e.g. protein domains at positions past the end of the protein). Usually fixed promptly if you complain, but you've got to find them first.
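A minimal fail-fast sketch for that last point (the field names are made up): crash loudly at the offending record instead of merrily carrying on.

    # Check a (hypothetical) domain annotation against its protein sequence and
    # stop at the bad record rather than silently producing gibberish downstream.
    def check_domain(protein_id, seq, dom_start, dom_end):
        if not (1 <= dom_start <= dom_end <= len(seq)):
            raise ValueError(
                f"{protein_id}: domain {dom_start}-{dom_end} "
                f"falls outside the {len(seq)} aa sequence"
            )

    check_domain("P12345", "M" * 120, 10, 95)      # fine
    # check_domain("P12345", "M" * 120, 110, 140)  # raises ValueError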
I often encounter problems related to the fact that computer scientists index their arrays starting at 0, while biologists index their sequences starting at 1. A simple concept that drives the noobs mad and even trips up more experienced scientists every once in a while.
Doing pathway statistics or gene set enrichment statistics and then presenting the list of gene sets as a valuable result in itself, instead of using the statistics just as a means to decide which pathways need to be evaluated further.
(This is bad for many reasons: for instance, because the statistical contribution of a key regulatory gene in a pathway is equal to that of 1 out of 7 isoenzymes that catalyze an irrelevant side reaction, because the significance of a pathway changes when you add a few irrelevant genes, and because we have many overlapping pathways.)
Another typical mistake is to solve problems that nobody has.
I would define a stupid mistake as falling prey to a trivial but catastrophic pitfall; an error in judgment is due more to a fundamental lack of understanding or willful ignorance.
No, I think it is actually wrong to publish a list of pathways without further judgement. I think not applying that judgement is a mistake. But I have to admit that I don't really understand your examples, so maybe my English is not good enough to grasp the finer points of the difference between poor judgement and stupid mistakes.
One mistake: not checking the 0x4 bit in the FLAG column of a SAM (or BAM) file, which indicates that the entry is unmapped. RNAME, CIGAR, and POS may be set to something non-null (an actual string!) but they are not meaningful if the 0x4 flag says the read is unmapped.
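The check itself is one line on the integer FLAG field (with pysam you'd just use read.is_unmapped); a sketch:

    # The FLAG column is a bit field; if 0x4 is set the read is UNMAPPED,
    # regardless of whatever happens to be sitting in RNAME/POS/CIGAR.
    def is_unmapped(flag):
        return bool(flag & 0x4)

    print(is_unmapped(4))   # True  - unmapped, ignore RNAME/POS/CIGAR
    print(is_unmapped(99))  # False - mapped, properly paired read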
So this is a (very) late reply, but in case it's still helpful or someone comes across this question like I did, rm -ir will ask before deleting files. Maybe a little annoying to type y a hundred times, but better to do that than lose all your data to a mistyped glob IMHO.
It has saved me quite a bit of headache. The capital-I variant (rm -I) prompts only when you remove more than three files - good for when you accidentally type rm some_directory/ * (notice the space).
I was about to add this one myself. It's bitten me a couple of times.
:) I did the same stupid thing many times!! I lost weeks of work with one click!
I too have had that moment of dread when I realized I typed rm * /folder versus rm /folder/*! Check out some of the solutions on this forum page, specifically trash-cli. You can set up a trash folder so after deleting files they are not completely gone and can be restored if needed. You would have to manually empty the trash folder or set up a cron job to do so on a regular basis, but this may help circumvent the nightmares listed here!
I was just deleting some unnecessary files from a dir and managed to have a space and an asterisk at the end of the rm command. As soon as I realized what was happening I hit ctrl-c, but important files without backups were already gone. Oh well, it will only take like 2-3 weeks to reproduce them. Also time to edit .bashrc following Philipp's post..
Once I did something very similar: I deleted all the files and subdirectories in a directory that I thought I had a duplicate of. Shortly after, I realized I was inside a symlinked directory and was deleting the original data...
I wouldn't say it's stupid, but I think a very common mistake is to not correct for batch effects in high-throughput data.
Batch effects can (best-case) hide the real effect that you're looking for, or (worst-case) make it look like your variable of interest is contributing to your findings when it's actually an artifact.
Leek + Irizarry et al. have a sobering review on this here.
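Not from the review, just a minimal sketch of the simplest mitigation: record the batch and include it as a covariate in the per-gene model (statsmodels here; the column names and numbers are made up):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-gene table: expression plus the condition of interest and
    # the processing batch. Fitting batch alongside condition keeps a
    # batch-confounded difference from being attributed to the condition.
    df = pd.DataFrame({
        "expr":      [5.1, 5.3, 7.9, 8.2, 5.0, 7.8],
        "condition": ["ctrl", "ctrl", "case", "case", "ctrl", "case"],
        "batch":     ["b1", "b1", "b1", "b2", "b2", "b2"],
    })
    fit = smf.ols("expr ~ C(condition) + C(batch)", data=df).fit()
    print(fit.params)  # condition effect estimated with batch accounted for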
Running the bwa/GATK pipeline with a corrupt/incompletely generated bwa index of hg19. Everything still aligned, but one of 2 mates would have its strand set incorrectly. Other than the insert size distribution, everything seemed normal, until the TableRecalibration step downshifted all quality scores significantly and then UnifiedGenotyper called 0 SNPs. 1st time I've seen a problem with step 1 of a pipeline not become obvious until step 5+.
But you should be careful. Doing that will misplace the indel position in a microsatellite.
If I understand you correctly, you are saying that this will inflate the number of variants, since many have ambiguous positions? Interesting - do aligners generally guarantee that such ambiguous variants are consistently placed for forward and reverse reads?
BWA always places the indel at the beginning of a microsatellite. If you align the read to the rev-complemented ref, the indel will be at the end. Many indel callers assume the bwa behavior, though there are also tools to left-align indels.
Isn't this just POS+length(SEQ)? I'm having doubts now...
That only applies if the alignment contains only matches or mismatches, i.e. CIGAR strings consisting of a single number followed by M (like 76M). For all other alignments you will need to parse the CIGAR string and build the end coordinate from the start position plus the lengths of the reference-consuming operations.
Phew... I'm glad I only have matches and mismatches, so I fall in the easy category :-) Thanks a lot for adding this information, this can be a big trap!
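For anyone who lands in the harder category, a sketch of that bookkeeping (plain Python; per the SAM spec, only M/D/N/=/X consume reference bases):

    import re

    # 1-based, inclusive end position on the reference, given SAM POS and CIGAR.
    # M, D, N, = and X consume reference; I, S, H and P do not.
    def reference_end(pos, cigar):
        consumed = sum(int(n) for n, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar)
                       if op in "MDN=X")
        return pos + consumed - 1

    print(reference_end(100, "76M"))            # 175
    print(reference_end(100, "30M5I10M5D31M"))  # also 175 (30+10+5+31 reference bases)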
I made one a few months ago. I launched a heavy process on a pay-per-use cluster and it ran for one week. I thought 6 pennies/hr couldn't be too much money. I received a bill for $832 USD. I'm not using that cluster again unless I estimate the total cost of the process first.
edit: the price is per core
By my count, 6 pennies per hour is $1.44 a day or about $10 a week. How did you get $832?
Possibly: implementing methods that magically generate p-values from non-replicated RNA-seq experiments, possibly as a result of pressure from 'experimentalists'. I really would like to know the history behind their implementation (were they forced by reviewers, or by other groups?). Now we have to explain why those p-values are bogus, and why so few significantly differentially expressed genes are detected in a non-replicated analysis.
I spent hours implementing a parallel R script to fully exploit the 64 cores and 250 GB of RAM of the lab server. Then, really proud of myself, I ran it on my 4-core, 8 GB RAM desktop PC. Boom.
I remember once I put a comma inside the wrong bracket in some matrix assignment on genotype data, and after running the code my laptop froze so hard I had to shut it down by holding the power button. The laptop needed two reboots to get back to normal. After going through the code to see what happened, I realized I had assigned a matrix to each cell of itself (or something like that), basically creating an object somewhere around 250 GB in size. My laptop had 8 GB of RAM and about the same in swap (older Ubuntu) and it ran out of both in the blink of an eye :D
Good thought, but on a server (I should have said earlier that I make this mistake in server sessions), the process IDs from one login are not known in another login/session.
I had another good one recently. I was executing an untested bash script to generate output from two input files on our cluster. I just let it run overnight. When I checked in the morning, it had generated about 40 TB worth of output (the expectation was about 20 MB). There was a tiny spelling mistake that led to an infinite loop. Oops. I was lucky to check it when I did, because there were still a few TB of space left, so at least other jobs didn't get killed because of it.
How about forgetting to keep a fully charged battery for your wireless keyboard and/or mouse nearby when the one inside is running out of power?
I have spent hours, on repeated occasions, looking for a mysterious error in a Perl script that in the end was simply a = instead of a == inside an if statement.
Another recurrent mistake: not documenting what I did and what those scripts do, in the belief that everything is so intuitive, organised, simple and natural that it won't be necessary. Then, some time later, I have to spend hours trying to work out what all that mess was.
I'm sure someone has mentioned this, but running something like:
bcftools view -S something.txt in.bcf > in.bcf
Goodbye in.bcf, I hardly knew ye.
Also memory management for Java programs on an HPC (beagle I'm looking at you). I love spending hours trying to guess exactly how much memory the JVM is going to use and toggling -Xmx/-Xms flags.
I feel your pain. A handy way to avoid this is to set -o noclobber, which prevents you from redirecting into existing files. See: https://mywiki.wooledge.org/NoClobber
I have the opposite mistake: I accidentally replace a string in the whole document and something I didn't want to change gets replaced. Now I visually select the area where I want to do the replacement...
Making claims without experimental validation. Especially involving studies utilizing multiplexed technologies such as microarrays and high-throughput sequencing.
Some really great comments here; nice to know that such things happen to all genii ;). I have to say my most painful moments relate to my assumption that data obtained elsewhere are correct in every way. I also remember, early in my career, using PDB files and realising that sometimes chains are represented more than once; while manually checking calculations involving atomic coordinates I was utterly perplexed and wanted to break my computer. Oh, the joys of bioinformatics.
Assuming that the gene IDs in "knownGenes.gtf" from UCSC are actually gene IDs. Instead they just put the transcript ID as the gene ID.
This just caused me a bit of pain when doing read counting at the gene level. Basically, any constitutive exon in a gene with multiple splice forms was ignored because all the reads in that exon were treated as ambiguous.
I found myself guilty of iterating through a loop and storing data, say, every 100 iterations... but not storing the very last bit of the data (e.g. lines 10001 to 10026) at the very end.
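A minimal sketch of the fix (the parse/store details are made up): flush whatever is left in the buffer once the loop is done.

    # Buffer records and write them out every `chunk` lines - and don't forget
    # the leftover records (the final partial chunk) after the loop ends.
    def process(lines, store, chunk=100):
        buffer = []
        for line in lines:
            buffer.append(line.rstrip("\n"))
            if len(buffer) == chunk:
                store(buffer)
                buffer = []
        if buffer:  # the easy-to-forget part: lines 10001-10026 live here
            store(buffer)

    process(open("data.txt"), store=lambda recs: print(len(recs), "records stored"))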
How about writing a tool and being convinced it works perfectly, so you run it on a complete dataset instead of testing it first on a subset, and find out after it has run for an hour or so that you made a tiny mistake somewhere. Sooo much time wasted that I'll never get back :P
Spending a few months finding an interesting correlation in the data and then presenting it to the lab that did the sequencing, only to find out they changed the coverage on the last few samples!
This one was really good: embarking on sudo yum update when there was lots of stuff to update and swap space was very low. Ended up with a situation much like this. Took me a good four hours before I saw my desktop again.
Another fun one is when you develop a pipeline on a small test set, prioritizing speed above all, and then you increase the test set size and realize that you're creating TBs of temp data and using hundreds of GB of RAM :)
Sometimes you need to distinguish between "atgc" and "ATGC", but in most cases they mean the same thing. So always convert any string you read to uppercase if you do not need to distinguish case.
I feel like I spent my whole Ph.D. debugging this. Hope it helps others.
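The one-liner in question, for what it's worth (soft-masked repeats, i.e. lowercase bases, are the usual reason the cases differ):

    seq = "acgtACGTnN"
    # Unless you specifically care about soft-masking, normalise case up front so
    # "atgc" and "ATGC" compare equal everywhere downstream.
    print(seq.upper() == "ACGTACGTNN")  # True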
Running statistical analyses with no understanding of the tests you're using, why you're using them, when it's appropriate to use them, or how your data need to be formatted, normalized, or scaled to use them... but hey, you made a volcano plot, so it must be publication quality, right? Read the damn paper...
The most pervasive and damaging problem afflicting bioinformatics studies is the failure to curate covariate information, resulting in problems of separation and/or complete separation.
This problem is particularly sinister because this frequently occurs long before the bioinformatician is consulted on the project, meaning the project already has unnecessary limitations before the data scientist/statistician/what-have-you receives the data.
At minimum, this will limit statistical power; in certain cases, it means there is a confound that can't be mitigated without meta-analysis or extreme measures. In my experience this affects, oh I don't know, >70% of labs, including world-class, well-known labs that don't retain a statistician.
Points 1 and 2 are not really mistakes; I never delete older versions. Instead, I create files with timestamps. Unless you know you'll never need the older version (and still have the code/documentation to recreate it if necessary), deleting it is not a smart thing to do. I'd rather delete unzipped files after using them than delete the zip archive.
Points 1-4 are all about saving space. Of these, only #3 is safe in all scenarios, as no information is lost. #4 is probably specific to your way of using GATK.
Staying in the office all day
good way to boost reputation.
meta: should this Q be community-wiki?
What do you mean by "generate random genomic sites without avoiding masked (NNN) gaps"? Could you explain in more detail? I don't understand.
see this?
http://genome.ucsc.edu/cgi-bin/das/hg19/dna?segment=chr1:1,10000
you wouldn't want to sample from it
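One way to do it is plain rejection sampling against the sequence itself; a sketch (the window size, counts, and toy sequence are made up):

    import random

    # Draw random fixed-width windows from a chromosome sequence, rejecting any
    # window that overlaps an assembly gap (a run of N, like the region above).
    def random_sites(seq, width=200, n_sites=1000, max_tries=100000):
        sites = []
        for _ in range(max_tries):
            if len(sites) == n_sites:
                break
            start = random.randrange(0, len(seq) - width)
            if "N" not in seq[start:start + width].upper():
                sites.append((start, start + width))  # 0-based, half-open
        return sites

    chrom = "N" * 10000 + "ACGT" * 250000  # toy: a leading gap, then real sequence
    print(len(random_sites(chrom, n_sites=10)))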