When running salmon quant on multiple reads, I get the error:
"/var/lib/slurm-llnl/slurmd/job00431/slurm_script: line 12: 213183 Segmentation fault (core dumped) salmon quant -i /home/user/fol2/reference_transcriptome/transcripts_index -l A -r /home/user/fol2/raw_fastq/D_{1..3}/.gz --validateMappings -o transcripts_quant"
Everything seemed to run normally before. I read online that a segmentation fault means that the program is trying to access memory it doesn't have access to. Currently, my server only has 1 node available (that I am sharing with someone else who has an active program running).
I am curious whether the issue is that the server doesn't have enough memory available. I am a bit confused, because if that were the problem, wouldn't Slurm tell me at the beginning, before the job started?
Additionally, if this is a salmon issue (though I couldn't find anything online), it would be very helpful to know that too.
Thanks so much!
I don't know the cause of the segfault, but you probably want to think twice about your Salmon command.
Salmon will output only one quantification, because it treats multiple input files as technical replicates that should be quantified together. With
D*_{1..3}/*.gz
, you are globbing over all files from all folders matching the pattern. Judging by the names of your input files, this isn't what you want - you probably want to quantify the files from each sample folder together (and don't forget to exclude the test_clean.fq.gz file).
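A minimal sketch of what a per-sample loop could look like (the folder glob, index path, and output names are assumptions based on the command in your question - adjust them to your actual layout):

# Run one salmon quant per sample folder so each sample gets its own output.
# Paths and the folder glob are assumptions; make sure the read glob does
# not pick up the test_clean.fq.gz file.
INDEX=/home/user/fol2/reference_transcriptome/transcripts_index
for dir in /home/user/fol2/raw_fastq/D*_{1..3}; do
    sample=$(basename "$dir")
    salmon quant -i "$INDEX" -l A \
        -r "$dir"/*.fq.gz \
        --validateMappings \
        -o "quant_${sample}"
done

That way you get one quantification directory per sample instead of collapsing all samples into a single run.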
Thanks so much! It looks like that was the cause of the segfault (maybe it was too memory-intensive because it was trying to quantify all 16 files together?).
I fixed it by adding the sample directories to a .txt file and looping over it, calling salmon quant one by one. I'm not sure I did this in the best way (for example, STAR has a way to align multiple files into one output file, but I did not see anything like that for salmon).
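Roughly, the loop looks something like this (sample_dirs.txt has one sample directory per line; the exact paths and output names here are placeholders for my actual setup):

# Read one sample directory per line and quantify each sample separately.
while read -r dir; do
    sample=$(basename "$dir")
    salmon quant -i /home/user/fol2/reference_transcriptome/transcripts_index \
        -l A -r "$dir"/*.gz --validateMappings \
        -o "transcripts_quant_${sample}"
done < sample_dirs.txt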
Once it finishes running, do you know if I can just combine the different output files on their rows?
Please post the salmon log. How much memory did you request for that job? Please show the slurm submission headers/command.
Thanks for your response! Here's the submission with the command: