Hey!
I have used featureCounts many times in the past without any problem, but today, for some unknown reason, I am running into an issue.
I am annotating a dataset of 18 .bam files (aligned to mm39) using the mm39 .gtf file from Ensembl.
After launching featureCounts with multiple threads, it starts running normally, as shown in the Linux process list (via the top command). However, after 2-3 minutes it enters a constant "stuck" state with very low CPU usage (2-3%). It has stayed like this for up to an hour without processing even a single file.
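For context, the call looks roughly like this (file names, thread count, and the -p flag below are placeholders/assumptions rather than my exact command):

    # roughly my invocation; -T sets the number of threads,
    # -p applies only if the libraries are paired-end
    featureCounts -T 8 -p \
        -a Mus_musculus.GRCm39.gtf \
        -o counts.txt \
        *.bam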
Have any of you experienced this problem before?
I have tried different .gtf files, but nothing works. The issue is not fixed even by removing and reinstalling Subread in my conda environment.
Thanks in advance :)
Does it work with one file? Try a couple. It is pretty fast, so it should not be a problem to do a couple of test runs.
I tried with only one file, but it did not work either. I hypothesized a memory issue, but even though I have more than 1 TB available on the external hard disk, it still didn't work. I then tried moving the files onto my machine and running featureCounts from there, and now it works!
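In case it helps anyone else, the workaround was essentially this (the local directory below is just an illustration):

    # copy the inputs from the external drive onto the internal disk and run from there
    mkdir -p ~/fc_local
    cp /path/to/external/*.bam /path/to/external/Mus_musculus.GRCm39.gtf ~/fc_local/
    cd ~/fc_local
    featureCounts -T 8 -a Mus_musculus.GRCm39.gtf -o counts.txt *.bam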
Any program that needs to read a large amount of data should never be run against an external disk, unless the external drive is connected through a fast interface such as Thunderbolt 4.
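One way to check whether the external drive is the bottleneck (iostat is part of the sysstat package; device names will differ on your system):

    # a process stalled on slow I/O usually sits in "D" state with low %CPU in top;
    # iostat shows whether the external device is saturated (high %util / await)
    top
    iostat -x 2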
Hell, this does not seem to be a featureCounts problem. Do you have enough memory and swap space?
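For example, something like this shows both at a glance (standard Linux tools, nothing featureCounts-specific):

    free -h          # RAM and swap usage, human-readable
    swapon --show    # configured swap devices and their sizes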