Problem with GPU guppy_basecaller and SLURM
18 months ago
kenneditodd

Hello,

I am having trouble running the GPU version of guppy on a cluster using SLURM. I have two guppy scripts, guppy_pass1.sh and guppy_pass2.sh. The pass 1 script basecalls to look at the 'raw' data without additional filtering/trimming; the pass 2 script basecalls to look at the 'cleaned' data with additional trimming and filtering. My pass 1 script always works, and my pass 2 script always fails and the job is dumped.

Here is my guppy_pass1.sh script. The completed job's stats are commented out.

#!/bin/sh
#SBATCH --job-name guppy_pass1
#SBATCH --mem 2G
#SBATCH --mail-user email@email.com
#SBATCH --mail-type END,FAIL
#SBATCH --output logs/%x.%N.%j.stdout
#SBATCH --error logs/%x.%N.%j.stderr
#SBATCH --partition gpu
#SBATCH --gpus=1
#SBATCH --time 00:30:00 ## HH:MM:SS

# Source settings
source $HOME/.bash_profile

# Set variables
in=/path/to/fast5/dir
out=/path/to/fastq/dir
cfg=rna_r9.4.1_70bps_hac.cfg

# Run guppy
# Using ~20% of fast5 files
guppy_basecaller --input_path $in \
                 --save_path $out \
                 --config $cfg \
                 --compress_fastq \
                 --records_per_fastq 0 \
                 --disable_pings \
                 --device cuda:all \
                 --recursive \
                 --chunks_per_runner 1024

# KEY
# --input_path - path to directory with fast5 files
# --save_path - path to where you will save basecalled fast5/fastq files
# --config - path to configuration file, in the data directory of the guppy download, specific to kit and flow cell
# --compress_fastq - compress fastq with gzip
# --records_per_fastq - maximum number of records per fastq file; 0 means use a single file (per run id, per batch)
# --disable_pings - disable the transmission of telemetry pings. By default, MinKNOW automatically sends experiment performance data to Nanopore
# --device - specify GPU device; options are 'auto' or 'cuda:<device_id>'; use cuda:all if you have more than one CUDA GPU
# --recursive - search for input files recursively
# --chunks_per_runner - maximum chunks per runner; this number has the greatest effect on GPU efficiency

# BY DEFAULT
# chunk size:         2000
# chunks per runner:  512
# minimum qscore:     7
# num basecallers:    4
# runners per device: 4

# JOB STATS
# Cores: 1
# CPU Utilized: 00:02:21
# CPU Efficiency: 62.11% of 00:03:47 core-walltime
# Job Wall-clock time: 00:03:47
# Memory Utilized: 1.06 GB
# Memory Efficiency: 52.80% of 2.00 GB
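As a side note (not part of my original scripts): before launching guppy it can help to sanity-check that SLURM actually granted the job a GPU. With `--gpus`, SLURM normally exports `CUDA_VISIBLE_DEVICES` to the job; a minimal sketch, assuming that variable is the indicator:

```shell
# Sanity check: confirm SLURM exposed a GPU to this job before running guppy.
# An empty/unset CUDA_VISIBLE_DEVICES usually means no GPU was granted.
check_gpu() {
  if [ -n "${CUDA_VISIBLE_DEVICES:-}" ]; then
    echo "GPU(s) visible: $CUDA_VISIBLE_DEVICES"
  else
    echo "no GPU visible to this job"
    return 1
  fi
}
```

You could call `check_gpu || exit 1` right before the guppy_basecaller line so the job fails fast with a clear message instead of a cryptic crash.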

This is my guppy_pass2.sh script. I have tried tweaking some parameters and it fails every single time.

#!/bin/sh
#SBATCH --job-name guppy_pass2
#SBATCH --mem 50G
#SBATCH --mail-user email@email.com
#SBATCH --mail-type END,FAIL
#SBATCH --output logs/%x.%N.%j.stdout
#SBATCH --error logs/%x.%N.%j.stderr
#SBATCH --partition gpu
#SBATCH --gpus=1
#SBATCH --time 00:30:00 ## HH:MM:SS

# Source settings
source $HOME/.bash_profile

# Set variables
in=/path/to/fast5/dir
out=/path/to/fastq/dir
cfg=rna_r9.4.1_70bps_hac.cfg

# Run guppy
# Using ~20% of fast5 files
guppy_basecaller --input_path $in \
             --save_path $out \
             --config $cfg \
             --compress_fastq \
             --records_per_fastq 0 \
             --disable_pings \
             --device cuda:all \
             --recursive \
             --chunks_per_runner 1024 \
             --min_qscore 10 \
             --trim_adapters \
             --do_read_splitting \
             --max_read_split_depth 2 

# KEY
# --input_path - path to directory with fast5 files
# --save_path - path to where you will save basecalled fast5/fastq files
# --config - path to configuration file, in the data directory of the guppy download, specific to kit and flow cell
# --compress_fastq - compress fastq with gzip
# --records_per_fastq - maximum number of records per fastq file; 0 means use a single file (per run id, per batch)
# --disable_pings - disable the transmission of telemetry pings. By default, MinKNOW automatically sends experiment performance data to Nanopore
# --device - specify GPU device; options are 'auto' or 'cuda:<device_id>'; use cuda:all if you have more than one CUDA GPU
# --recursive - search for input files recursively
# --chunks_per_runner - maximum chunks per runner; this number has the greatest effect on GPU efficiency
# --min_qscore - minimum qscore threshold for the reads to pass
# --trim_adapters - trim adapters from the sequences in the output files
# --do_read_splitting - perform read splitting based on mid-strand adapter detection
# --max_read_split_depth - the maximum number of iterations of read splitting that should be performed

# BY DEFAULT
# chunk size:         2000
# chunks per runner:  512
# minimum qscore:     7
# num basecallers:    4
# runners per device: 4

# JOB STATS
# State: FAILED (exit code 139)
# Cores: 1
# CPU Utilized: 00:00:08
# CPU Efficiency: 88.89% of 00:00:09 core-walltime
# Job Wall-clock time: 00:00:09
# Memory Utilized: 1.12 MB
# Memory Efficiency: 00.00% of 50.00 GB
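For what it's worth, exit code 139 can be decoded mechanically: a shell reports 128 + signal number when a process is killed by a signal, so 139 means signal 11 (SIGSEGV), which matches the segmentation fault in the stderr log:

```shell
# Decoding a shell exit status above 128: the process died from signal N - 128.
status=139
sig=$((status - 128))                              # 139 - 128 = 11
echo "killed by signal $sig ($(kill -l "$sig"))"   # → killed by signal 11 (SEGV)
```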

Here is my standard error

/var/spool/slurmd/job21345/slurm_script: line 34: 3859554 Segmentation fault      (core dumped) guppy_basecaller --input_path $in --save_path $out --config $cfg --compress_fastq --records_per_fastq 0 --disable_pings --device cuda:all --recursive --chunks_per_runner 256 --min_qscore 10 --trim_adapters --do_read_splitting --max_read_split_depth 2

Here is my standard output

chunk size:         2000
chunks per runner:  1024
minimum qscore:     10
records per file:   0
fastq compression:  ON
num basecallers:    4
gpu device:         cuda:all
kernel path:        
runners per device: 4

Use of this software is permitted solely under the terms of the end user license agreement (EULA).
By running, copying or accessing this software, you are demonstrating your acceptance of the EULA.
The EULA may be found in /tools/ont-guppy-gpu/bin
Found 25 input read files to process.
Init time: 603 ms

0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
*******************

In my guppy basecaller log I have the error line below. I've seen some people post it on the Nanopore community but don't know the fix yet.

lamp_arrangements arrangement folder not found: /tools/ont-guppy-gpu/data/read_splitting/lamp_arrangements
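That message suggests guppy is looking for read-splitting arrangement files under its data directory and not finding them. A quick check you could run (the /tools/ont-guppy-gpu prefix is taken from the error message above; adjust for your install):

```shell
# Hypothetical sketch: verify that the Guppy install actually ships the
# read-splitting arrangement files. A missing directory here would explain
# a crash only when --do_read_splitting is used.
check_arrangements() {
  dir="$1/data/read_splitting/lamp_arrangements"
  if [ -d "$dir" ]; then
    echo "found: $dir"
  else
    echo "missing: $dir"
    return 1
  fi
}

# Example (adjust the prefix to your system):
# check_arrangements /tools/ont-guppy-gpu
```

If the directory is missing, that points at an incomplete install or a version mismatch rather than a SLURM problem.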

I need help! I have talked to the cluster admins and they are stumped as well, but they have no background in guppy or bioinformatics. Any thoughts, questions, concerns?

nanopore guppy

Did you try increasing the memory or reducing --chunks_per_runner?

Are you using Guppy 4.4?

There is an issue related to it here: https://github.com/nanoporetech/rerio/issues/17


I tried running it with --chunks_per_runner 256 and got the same error message. I have guppy version 6.8.


Just for testing, please use only 10 and see if it runs successfully.


Nope - but I just noticed another error message in the guppy log. I've seen some people with the same problem on the Nanopore community but am not sure how to fix it. I added the error message to the post.


Can you post the error message?


Sorry, I thought I did - it's now updated.

