Hi, I've just run into another problem. After converting an SRA file to FASTQ and running FastQC to check the quality,
the program got stuck at 95% on "read xxx sequences" for over an hour.
Has anyone had a similar problem before?
Thanks
I ran into the same problem.
Solution: I realized that the version I was using (installed automatically with sudo apt-get install fastqc) was 0.11.4 (0.11.7 was released this year).
I installed 0.11.5, referenced here, and completed my analysis without issues. Worth a try :) (I use Ubuntu 16.04)
I've had this problem as well - I'm not sure what causes it. However, an easy workaround is to first uninstall FastQC ("sudo apt-get remove fastqc" on Ubuntu). Next, find and go into the fastqc directory - you can run the application from there without installing it. Just type "./fastqc" and it'll open. I've never had issues running it this way.
Running FastQC from the command line on Linux (usually Ubuntu), I have often had problems with larger fastq files or with settings that use more resources, like --nogroup. The Perl wrapper script that launches FastQC sets a Java memory limit of 250 MB (-Xmx250m). When I have problems I usually increase that amount.
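If you want to try raising that limit yourself, one option is to patch the -Xmx value in the wrapper script. A minimal sketch below, run on a throwaway mock copy so nothing installed gets touched - in practice you would operate on the real script (find it with `command -v fastqc`; the exact path and script contents vary by install method):

```shell
# Mock wrapper standing in for the real fastqc launch script (contents are
# an assumption; your installed script will look different).
printf '#!/bin/sh\nexec java -Xmx250m -jar fastqc.jar "$@"\n' > fastqc_copy

# Raise the Java heap limit from 250 MB to, say, 4 GB.
sed -i 's/-Xmx250m/-Xmx4g/' fastqc_copy

# Confirm the new limit is in place.
grep -n 'Xmx' fastqc_copy
```

After patching the real script (or a copy of it), rerun FastQC on the file that hung and watch whether it gets past 95%.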
Starting it with sudo rights solved the problem for me.
FastQC may have run out of memory. How much RAM do you have and how big is the sequence file you are trying to QC?
I have 8 GB; the fastq file is 8.4 GB - will it use all of that?
Another dumb question: does most software in the RNA-seq pipeline require a large amount of RAM?
Thank you very much
You can check memory usage with e.g. top or htop in your terminal.
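For a quick non-interactive snapshot on Linux, /proc/meminfo also works (Linux-only; the values are in kB):

```shell
# Total and currently available memory
grep -E '^(MemTotal|MemAvailable)' /proc/meminfo

# Or a one-shot top in batch mode, headers only
top -b -n 1 | head -n 5
```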
What OS are you using? @WouterDeCoster's suggestion will work for unix/OS X but not for Windows.
Generally, memory usage will be proportional to the analysis you are doing and the software you are using. Some tools are known to require significant RAM (e.g. the STAR aligner needs 30+ GB for alignments against the human genome). Most de novo assemblers will need tens to hundreds of GB of RAM for very large datasets.
FastQC should use swap once it runs out of RAM. Is it possible that you may have a corrupt fastq file (based on the other question you had asked)? Did you check the size of the corresponding file on ENA?
I tracked memory and CPU usage; after reaching 95%, everything dropped to 0...
I am using Ubuntu. I'll check ENA to see whether the fastq file has a problem.
I am not sure what the problem is: on the ENA website the fastq bytes field is empty, but I checked the read number and it is correct.
I downloaded the SRA file from NCBI and used fastq-dump to convert it to fastq.
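Since a truncated or malformed fastq-dump output is a plausible cause of FastQC hanging, a quick structural sanity check on the FASTQ may help. This is a generic sketch, not part of the SRA toolkit: it assumes an uncompressed file (pipe through zcat for .gz) and only checks the four-line record shape. The demo runs on a tiny inline file; point the awk at your real fastq instead:

```shell
# Demo input: two well-formed FASTQ records (stand-in for your real file).
printf '@read1\nACGT\n+\nIIII\n@read2\nGGCC\n+\nHHHH\n' > sample.fastq

# Check that every record has an '@' header, a '+' separator, matching
# sequence/quality lengths, and that the line count is divisible by 4.
awk 'function die(msg) { print msg; bad = 1; exit 1 }
     NR % 4 == 1 && $0 !~ /^@/  { die("bad header at line " NR) }
     NR % 4 == 2                { seqlen = length($0) }
     NR % 4 == 3 && $0 !~ /^\+/ { die("bad separator at line " NR) }
     NR % 4 == 0 && length($0) != seqlen { die("length mismatch at line " NR) }
     END { if (bad) exit 1
           if (NR % 4 != 0) { print "truncated: " NR " lines"; exit 1 }
           print NR / 4 " records OK" }' sample.fastq
```

On the demo input this prints "2 records OK"; on a file cut off mid-record it reports truncation, which would point at the download/conversion rather than FastQC itself.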