"bowtie2 died with signal 9 (KILL)" error message, having trouble figuring out how to fix my script
5.9 years ago
meerapprasad ▴ 10

Hello! I am working on assembling a transcriptome of the terminal ganglion of the cricket (Gryllus bimaculatus). Because we are working on a non-model species, we did not remove ribosomal contamination during library prep, so we want to remove it bioinformatically. I am mapping our reads, which were quality-checked with FastQC and error-corrected with Rcorrector, to the SILVA ribosomal database. I uploaded the SILVA FASTA files onto the HPC, concatenated them, converted the U's to T's, and then indexed the resulting FASTA file with bowtie2-build (prep steps sketched below). Now we are running into a problem when we try to map (bowtie2-align) our reads against the SILVA database. Have you ever come across an error like this before? If so, how did you go about solving it?
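For reference, the database prep looked roughly like this (the SILVA file names below are illustrative, not the exact releases I used):

# Concatenate the SSU and LSU reference sets (file names illustrative)
cat SILVA_SSURef_tax_silva.fasta SILVA_LSURef_tax_silva.fasta > silva_combined.fasta

# Replace U with T in sequence lines only, leaving FASTA headers untouched
sed '/^>/!{s/U/T/g;s/u/t/g}' silva_combined.fasta > silva_db.fasta

# Build the bowtie2 index
bowtie2-build --threads 8 silva_db.fasta silva_db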

#!/bin/bash
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -pe smp 40
#$ -M x@xxx.edu -m be

export silva_db=/mnt/research/hhorch/term_RNAseq_analysis/silva_db/silva_db/silva_db
export r1=/mnt/research/hhorch/term_RNAseq_analysis/trimmed_reads_cor_val/fixed_1C_1_R1.cor_val_1.fq
export r2=/mnt/research/hhorch/term_RNAseq_analysis/trimmed_reads_cor_val/fixed_1C_1_R2.cor_val_2.fq
export summary=/mnt/research/hhorch/term_RNAseq_analysis/rRNA/1C_1_rrna_summary.txt
export mapped=/mnt/research/hhorch/term_RNAseq_analysis/rRNA/rrna_mapped_1C_1
export unmapped=/mnt/research/hhorch/term_RNAseq_analysis/rRNA/rrna_unmapped_1C_1
export single_mapped=/mnt/research/hhorch/term_RNAseq_analysis/rRNA/single_mapped_1C_1
export single_unmapped=/mnt/research/hhorch/term_RNAseq_analysis/rRNA/single_unmapped_1C_1
export name=1C_1  # sample name used for the output SAM below

bowtie2 --very-sensitive-local --phred33 -x "$silva_db" -1 "$r1" -2 "$r2" --threads 40 \
    --met-file "$summary" --al-conc-gz "$mapped" --un-conc-gz "$unmapped" \
    --al-gz "$single_mapped" --un-gz "$single_unmapped" -S "${name}_out.sam"

Here, silva_db is the path to the bowtie2-indexed SILVA database, r1 and r2 are the input reads, and summary, mapped, unmapped, single_mapped, and single_unmapped are the outputs of the alignment.

I have tried many variations of this script, mostly just troubleshooting the --very-sensitive-local and --threads parameters. For the alignment preset, I have also tried --sensitive-local, --fast-local, and --very-fast-local, with different combinations of threads and these qsub commands on our moosehead cluster:

qsub -l 10g=true -pe smp 16 silva_test9.sh
qsub -l gpu=1 -pe smp 16 silva_test10.sh
qsub -l 10g=true -pe smp 40 silva_test5.sh

The -pe smp value is where I tried different numbers of threads.

Within the script itself, I have also played with the number of --threads. In addition to using a variable for the CPU count, I have tried hard-coding the number of threads (12, 16, 40, etc.). The maximum amount of RAM we can use is 360 GB.

Most of the time I have been aborting the job after a couple of hours, and sometimes even after a few days. Output files are created in the correct location, but they are empty. The error message I keep receiving is "bowtie2 died with signal 9 (KILL)".

Thank you so much!

Meera

software-error alignment Assembly bowtie2

Your email address was redacted from the post above.


It seems that you ran out of memory.
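One quick way to confirm, assuming your cluster runs SGE with accounting enabled (the job ID below is a placeholder):

# Peak memory recorded by the scheduler for the finished/killed job
qacct -j <job_id> | grep -E 'maxvmem|failed'

# Or, on the compute node, check whether the kernel OOM killer fired
dmesg -T | grep -i -E 'killed process|out of memory'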


Hello! Yes, I think we have been running out of memory. Are you familiar with any other ways to map our sequences to this database?


I think it is a good approach. Just ask your local bioinformaticians / cluster (HPC) managers how to increase the amount of memory allowed for your job. With the Slurm job manager (sbatch, or SlurmEasy to launch scripts), this can easily be done at the beginning of the shell script you use to launch bowtie2, with the following lines:

#!/bin/bash
#SBATCH -p bioinfo           # name of partition
#SBATCH -N 1                 # number of nodes
#SBATCH -n 20                # number of cores
#SBATCH --mem-per-cpu=500    # memory in MB per core (20 x 500 MB = 10 GB total)

Note that if the cluster uses SGE to manage jobs (qsub, etc.), the lines above will not work; they are specific to Slurm. For SGE, use something like the following:

qsub -cwd -l h_vmem=100G bowtie.sh

Note that h_vmem is per slot, so if you request multiple slots with -pe smp 12, the h_vmem value will be multiplied by 12.
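Put together, a sketch of an equivalent SGE job header (assuming a parallel environment named smp, as in your original script):

#!/bin/bash
#$ -cwd
#$ -pe smp 12
#$ -l h_vmem=10G   # per slot: 12 x 10G = 120G total for the job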


Hi Gautier,

Thank you! I have been in contact with our HPC expert, and he has helped me increase the RAM as much as possible. Unfortunately, it is still not working. Would you recommend another aligner, such as BWA?


Yes, you can try BWA for ungapped mapping. If you think your alignments might contain gaps, use STAR instead (they might, since you are analysing transcripts, right?).
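A minimal sketch of the BWA route against the same SILVA FASTA (read and index file names are placeholders):

# Index the reference once
bwa index silva_db.fasta

# Map the pairs; pipe straight into a sorted BAM to save disk
bwa mem -t 16 silva_db.fasta reads_R1.fq reads_R2.fq | samtools sort -@ 4 -o rrna_bwa.bam -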

21 months ago
Raygozak ★ 1.4k

There's a simple workaround for this: split the reads (paired or single) into smaller chunks and map the chunks separately, since mapping one read is independent of the rest; you don't have to switch to things like BWA. Then merge the BAMs and that's it. A rough sketch of the idea is below.
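Here is an illustrative version (chunk size, file names, and thread counts are assumptions; it also assumes both mate files list reads in the same order, so splitting by an equal line count keeps the pairs in sync):

# 1) Split each mate file into chunks of 4M reads (4 lines per FASTQ record)
split -l 16000000 -d reads_R1.fq chunk_R1_
split -l 16000000 -d reads_R2.fq chunk_R2_

# 2) Map each chunk pair independently and sort the output
for c in chunk_R1_*; do
    i=${c#chunk_R1_}
    bowtie2 --very-sensitive-local -x silva_db -1 chunk_R1_$i -2 chunk_R2_$i --threads 8 \
        | samtools sort -@ 4 -o chunk_$i.bam -
done

# 3) Merge the per-chunk BAMs into one
samtools merge rrna_all.bam chunk_*.bam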

