Forum: What bioinformatics codes would you use for performance benchmarking on a new system?
Dave Carlson ★ 1.9k · 3 months ago

Hi All,

The team I work for is considering some new hardware that we think has the potential to show good performance on multithreaded bioinformatics/genomics software. To test this, I want to put together some performance benchmarks and am considering which codes to include. My inclination is to test some of the standard alignment tools like:

  • bwa mem
  • minimap2
  • bowtie2
  • ncbi blast+
  • diamond
  • STAR

All of these are multithreaded and would let us see how well they scale to several or many cores. However, they all involve a non-trivial amount of I/O, which might be a downside.

If you were performance benchmarking a new system using bioinformatics tools, what codes would you test?

Thanks! Dave

benchmarking
ADD COMMENT

"However, they all involve a non-trivial amount of I/O, which might be a downside."

So you are not planning a corresponding upgrade on the storage side?

I am curious what this performance benchmark is supposed to demonstrate. Satisfaction that something now runs in 2 minutes that used to take 20? Justification and/or bragging rights for having acquired a speedy system?

ADD REPLY

Seconded. Are you looking for fast performance of a single task/pipeline, or non-degrading performance when 5-15 people in the lab are all running different pipelines on the server, possibly targeting the same filesystem?

ADD REPLY

These are good questions. We will be getting a storage upgrade as well, I imagine, but we will be running the initial tests on a vendor-controlled instance of the hardware, which won't use the storage system it would eventually run with (if we purchase it). The goal is not bragging rights, since we have not actually purchased anything yet; rather, we want to assess the performance of some of the applications that might eventually run on the hardware.

One of the claimed selling points of the system is that it shows unusually good scaling for threaded, CPU-bound codes. I was hoping to test some bioinformatics codes to see whether they show the good scaling that some other applications (reportedly) have, but since the storage system we'll eventually use is an open question, I'd rather not have to worry too much about I/O during the tests, if I can help it. That may not be a sensible approach, though.

I think we're mostly interested in testing the scaling of a single multi-threaded task right now, so finding a popular bioinformatics application to test that with would be useful for us. I can always launch a bunch of nf-core pipelines if we want to test the throughput of lots of small applications running at the same time.
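
For what it's worth, my rough plan for keeping I/O out of the picture is something like the sketch below (placeholder file names; assuming a Linux test node with a reasonably sized /dev/shm): stage the inputs in RAM-backed tmpfs and discard the alignment output so the run stays essentially CPU-bound.

    # Stage the reference and reads into tmpfs so input comes from memory, not the storage under test
    cp ref.fa reads_R1.fastq.gz reads_R2.fastq.gz /dev/shm/

    # Build the index once, outside the timed region
    bwa index /dev/shm/ref.fa

    # Timed run: discard the SAM output so nothing is written to disk
    /usr/bin/time -v bwa mem -t 32 /dev/shm/ref.fa \
        /dev/shm/reads_R1.fastq.gz /dev/shm/reads_R2.fastq.gz > /dev/null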

ADD REPLY

You can try pseudoalignment (e.g. the kallisto quant command), because it doesn't write anything to disk (no BAM files, no temporary files, etc.) while processing the data.
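
A rough sketch of what that could look like (placeholder file and index names; only the small abundance tables are written out at the end of the run):

    # Build the transcriptome index once, outside the timed region
    kallisto index -i transcripts.idx transcripts.fa

    # Timed pseudoalignment run; -t sets the thread count to sweep over
    kallisto quant -i transcripts.idx -o quant_out -t 16 reads_R1.fastq.gz reads_R2.fastq.gz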

ADD REPLY

For benchmarking and collecting timings across multiple runs, I would highly recommend hyperfine, which can run a parameter sweep (e.g. over the number of cores) and report the relative speedups.

https://github.com/sharkdp/hyperfine

For example, a bwa mem sweep over 8 to 32 cores in steps of 8 (hyperfine runs each benchmark 10 times by default; -r 2 reduces that to 2 runs):

hyperfine -r 2 --parameter-scan cores 8 32 -D 8 'bwa mem -t {cores} /path/chromosomes.fasta  1m_R1.fastq.gz R2.fastq.gz >test.sam'

Another example uses parallel gzip (pigz) to check the effect of the number of cores:

hyperfine -r 1 --parameter-scan cores 1 3 'pigz -f -k -p {cores} x.fasta' --show-output --warmup 1

Output from the bwa mem sweep:

Summary
  'bwa mem -t 32 chromosomes.fasta CIM_1m_R2.fastq.gz >test.sam' ran
    1.06 ± 0.01 times faster than 'bwa mem -t 24 chromosomes.fasta  CIM_1m_R1.fastq.gz CIM_1m_R2.fastq.gz >test.sam'
    1.33 ± 0.02 times faster than 'bwa mem -t 16 chromosomes.fasta  CIM_1m_R1.fastq.gz CIM_1m_R2.fastq.gz >test.sam'
    2.45 ± 0.01 times faster than 'bwa mem -t 8 chromosomes.fasta  CIM_1m_R1.fastq.gz CIM_1m_R2.fastq.gz >test.sam'
ADD COMMENT

Wasn't familiar with that tool, but it looks really useful. Thanks!

ADD REPLY
GenoMax 147k · 3 months ago

Are you going to run directly on the hardware, or will there be a virtualization layer present?

bwa and minimap2 are well-written packages and should let you test scaling with an increasing number of cores using a standard sample and genome index. You already appreciate that the I/O system will come into play at some point, and that may become noticeable in the execution times as you scale beyond a certain number of cores (perhaps somewhere between 16 and 32). blast+ falls in this category too, but it will be dependent on I/O because of the large database sizes (the new core_nt database may be a good option here). diamond will likely require larger amounts of RAM, so it (and blast+) could be good candidates for a memory load test.
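
A minimal sketch of such a core-scaling sweep (placeholder file names; /usr/bin/time -v also records peak memory, which is handy for the RAM-hungry tools):

    # Sweep thread counts with minimap2, discarding the SAM output;
    # the timing report and peak RSS end up in the per-run log files
    for t in 4 8 16 32 64; do
        echo "threads=$t"
        /usr/bin/time -v minimap2 -a -t "$t" ref.fa reads.fastq.gz > /dev/null 2> minimap2_t${t}.log
    done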

Running large single-cell/single-nucleus RNA-seq datasets may also be a useful thing to look at.

If GPUs are part of the equation, then re-basecalling large PromethION datasets with dorado, as well as running AlphaFold2, would be good options to test.

While I am partial, trying the BBTools suite (it is multi-threaded; I would try clumpify.sh to stress-test with a large dataset) will give you a reference point for applications that require Java.
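
If you do try clumpify.sh, a minimal sketch would be something like this (placeholder file names; -Xmx caps the Java heap and t= sets the thread count, so both memory and threading can be varied):

    clumpify.sh in=big_reads.fastq.gz out=clumped.fastq.gz t=32 -Xmx64g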

ADD COMMENT

I believe we should get direct access to the hardware. Thanks for your suggestions!

ADD REPLY
Michael 55k · 3 months ago

Maybe your set is missing an assembly task? I would add Flye with its multithreading option to your benchmark. You could also think of some R/Bioconductor tasks, e.g. DESeq2 with multicore settings. Variant-calling pipelines such as GATK may also be useful to include. Phylogenetics and population genomics tools are another type of application that usually takes a long time to compute and could be included, e.g. IQ-TREE, MrBayes, AdmixtureBayes.
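
As a rough sketch of the kind of commands involved (placeholder inputs; the Flye example assumes you have a long-read dataset to assemble):

    # Multithreaded long-read assembly with Flye
    flye --nano-raw long_reads.fastq.gz --out-dir flye_asm --threads 32

    # Multithreaded maximum-likelihood tree inference with IQ-TREE 2
    iqtree2 -s alignment.fasta -T 16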

It all depends on what you will be using the hardware for.

ADD COMMENT
JustinZhang ▴ 120 · 3 months ago

I don't recommend benchmarking computing platforms with bioinformatics tools, since they are not made for this, unless you can make sure the software version and dependencies of each tool are kept consistent. See this issue as an example.

  1. For CPU and RAM benchmarking, please use the Phoronix Test Suite. It's open-source, safe, widely used, and its results are comparable with those published by others (a minimal sketch follows this list).
  2. For GPU benchmarking, please refer to this page.
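
A minimal sketch of how that looks in practice (the exact test profiles available depend on the installed PTS version):

    # See which test profiles can be installed
    phoronix-test-suite list-available-tests

    # Run a CPU-bound, multithreaded test profile and record the result
    phoronix-test-suite benchmark pts/compress-7zip
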
ADD COMMENT

While these programs may be useful for general-purpose benchmarking, in this case the OP actually wants to see the difference made by varying the amount of resources given to the bioinformatics applications they will ultimately use. In some sense the benchmarking is not for the hardware per se; it is for the actual workload that will generally be run on that hardware.

ADD REPLY

I understand; bioinformatics computing jobs are quite different from standard general-purpose benchmarks. I've also tried using Docker and Singularity to package tools and custom scripts and test them.

But for me it worked better to use the Phoronix Test Suite and similar tools. If you take a look at my profile, you can see one of my questions, titled "Choose between AMD or INTEL Server CPU in 2021-2022", posted on Biostars 3 years ago.

I was trying to buy an HPC system at the time, and the Phoronix Test Suite turned out to be the fastest and easiest way to get an objective result that I could compare against existing results on the Internet.

Based on that experience, I recommend the Phoronix Test Suite and similar tools to broaden the range of comparisons you can make.

ADD REPLY
