Hi,
I am running tophat and getting a lot of different errors. Most of them I could figure out, but for this one I don't know the reason. The error is:
OSError: [Errno 2] No such file or directory: 'test_yo/tmp/left_kept_reads.m2g_um.candidates_and_unspl.bam'
It appears after mapping, when the output tracks are being reported. Command used:
tophat --bowtie1 -G /projects/grub/archive/Mus_musculus/UCSC/mm9/Annotation/Genes/genes.gtf -o test_yo /biodata/biodb/ABG/genomes/bowtie/mm9 test/test_data/reads_1.fq
Please let me know if you have encountered this or know a solution. The same question appears here, but there is no answer yet.
My tophat.log:
[2012-04-16 18:49:10] Beginning TopHat run (v2.0.0)
-----------------------------------------------
[2012-04-16 18:49:10] Checking for Bowtie
Bowtie version: 0.12.7.0
[2012-04-16 18:49:10] Checking for Samtools
Samtools version: 0.1.17.0
[2012-04-16 18:49:10] Checking for Bowtie index files
[2012-04-16 18:49:10] Checking for reference FASTA file
Warning: Could not find FASTA file /biodata/biodb/ABG/genomes/bowtie/biodata/biodb/ABG/genomes/bowtie/mm9.fa
[2012-04-16 18:49:10] Reconstituting reference FASTA file from Bowtie index
Executing: /state/partition1/apps/bin/bowtie-inspect /biodata/biodb/ABG/genomes/bowtie/mm9 > test_yo/tmp/mm9.fa
[2012-04-16 18:51:12] Generating SAM header for /biodata/biodb/ABG/genomes/bowtie/mm9
format: fastq
quality scale: phred33 (default)
[2012-04-16 18:51:32] Reading known junctions from GTF file
[2012-04-16 18:51:35] Preparing reads
left reads: min. length=75, count=100
[2012-04-16 18:51:35] Creating transcriptome data files..
[2012-04-16 18:51:50] Building Bowtie index from genes.fa
[2012-04-16 18:56:57] Mapping left_kept_reads against transcriptome genes with Bowtie
[2012-04-16 18:56:58] Converting left_kept_reads.m2g to genomic coordinates (map2gtf)
[2012-04-16 18:57:00] Resuming TopHat pipeline with unmapped reads
[2012-04-16 18:57:00] Mapping left_kept_reads.m2g_um against mm9 with Bowtie
[2012-04-16 18:57:01] Mapping left_kept_reads.m2g_um_seg1 against mm9 with Bowtie (1/3)
[2012-04-16 18:57:02] Mapping left_kept_reads.m2g_um_seg2 against mm9 with Bowtie (2/3)
[2012-04-16 18:57:03] Mapping left_kept_reads.m2g_um_seg3 against mm9 with Bowtie (3/3)
[2012-04-16 18:57:04] Searching for junctions via segment mapping
[2012-04-16 18:58:38] Retrieving sequences for splices
[2012-04-16 19:00:14] Indexing splices
[2012-04-16 19:00:33] Mapping left_kept_reads.m2g_um_seg1 against segment_juncs with Bowtie (1/3)
[2012-04-16 19:00:41] Mapping left_kept_reads.m2g_um_seg2 against segment_juncs with Bowtie (2/3)
[2012-04-16 19:00:49] Mapping left_kept_reads.m2g_um_seg3 against segment_juncs with Bowtie (3/3)
[2012-04-16 19:00:57] Joining segment hits
[2012-04-16 19:02:31] Reporting output tracks
Traceback (most recent call last):
  File "/share/apps/bin/tophat", line 3778, in ?
    sys.exit(main())
  File "/share/apps/bin/tophat", line 3754, in main
    os.remove(m)
OSError: [Errno 2] No such file or directory: 'test_yo/tmp/left_kept_reads.m2g_um.candidates_and_unspl.bam'
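The traceback shows the crash is in tophat's own cleanup code: line 3754 calls os.remove() on a temporary BAM that no longer exists, and the unguarded call raises OSError. As a minimal sketch of the defensive pattern that would avoid this kind of crash (illustrative only, not TopHat's actual source):

import errno
import os

def remove_if_exists(path):
    # Illustrative only, not TopHat's actual code: tolerate temp
    # files that are already gone instead of crashing the whole run.
    try:
        os.remove(path)
    except OSError as e:
        if e.errno != errno.ENOENT:  # re-raise anything but "No such file"
            raise

# Hypothetical cleanup over a TopHat-style temp file:
for tmp_bam in ['test_yo/tmp/left_kept_reads.m2g_um.candidates_and_unspl.bam']:
    remove_if_exists(tmp_bam)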
If you want the run.log, I can provide that as well.
I can also reproduce the error using bowtie2 (the default in tophat2), and the test data used is from the tophat website.
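For completeness, the bowtie2 reproduction is just the same command without the --bowtie1 flag (note that tophat then expects a bowtie2 index at the given prefix):

tophat -G /projects/grub/archive/Mus_musculus/UCSC/mm9/Annotation/Genes/genes.gtf -o test_yo /biodata/biodb/ABG/genomes/bowtie/mm9 test/test_data/reads_1.fq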
Cheers
My run with tophat (v2.0.0 with bowtie2) matches what your output shows exactly, except that after "Reporting output tracks" it says "Run complete ... seconds elapsed". So I don't think there is any issue with your tophat run up to that point. Which OS are you working on? Did you try copying your FASTA into the bowtie-index folder (with the same name as the bowtie index)? Also, try running with the --keep-tmp option (just a thought, as your error seems to come from the tmp directory created by tophat).
Amazing, Arun! --keep-tmp solves it :)
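For reference, the working run is just the original command with the flag added; and per the suggestion above, the slow "Reconstituting reference FASTA file" step can be avoided by placing the FASTA next to the index (the FASTA source path below is a placeholder):

cp /path/to/your/mm9.fa /biodata/biodb/ABG/genomes/bowtie/mm9.fa
tophat --bowtie1 --keep-tmp -G /projects/grub/archive/Mus_musculus/UCSC/mm9/Annotation/Genes/genes.gtf -o test_yo /biodata/biodb/ABG/genomes/bowtie/mm9 test/test_data/reads_1.fq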
I'm glad that fixed it! :)
I had the same issue, and tophat didn't generate the accepted_hits.bam. So I am now running with the --keep-tmp option! Fingers crossed!
I am experiencing pretty much exactly this error. However, adding --keep-tmp did not solve the problem. I am running tophat on a campus cluster machine with a shared filesystem, and I wonder if the tmp files are getting removed for some reason related to that.
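One way to test this would be to point -o at node-local scratch rather than the shared filesystem, since tophat keeps its temp files under the output directory (as in the test_yo/tmp/... path above); the paths here are placeholders:

tophat2 -o /scratch/local/tophat_run /path/to/bowtie2_index/genome reads_1.fq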
My command line:
Output:
Version: tophat 2.1.0.
Interestingly, most of the files I process complete just fine; about 1/6 of them hit the above error. When I rerun them, they generally get the error again, although once or twice they succeeded on the third try. So maybe it has something to do with running out of memory, somehow?
Any suggestions would be appreciated!
Jessica
Yeah, it's possible. Can you try running it on a local machine?
I am unable to help much further here, as I am not sure what the exact problem is.
I can't run it on a local machine until probably next week, but I'll try it and report back.
I did run it on the cluster with 8 instead of 12 processors. Result: all the files that had caused errors succeeded (yay). I concluded that I had solved the problem and moved on to the next set of files, using 8 processors each. Result: a new error (though clearly similar):
So, again, an error about a missing file.
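For anyone reproducing this: the processor count mentioned above is controlled by tophat's -p/--num-threads option; a hypothetical invocation with 8 threads (index and read paths are placeholders):

tophat2 -p 8 --keep-tmp -o out_dir /path/to/bowtie2_index/genome reads_1.fq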
I'll report back when I have a chance to try these on a local machine.