Ubuntu 20.04 Crash
2.8 years ago
bala • 0

Hi

I am working in bioinformatics with MetaPhlAn and HUMAnN. My system has 64 GB of RAM, an Intel i9 processor, and Ubuntu 20.04 (2 GB of swap space). The problem is that when I run the bioinformatics commands, they take a long time and crash before producing results.

I have no idea what to do. Could someone kindly suggest a solution?

Thank you

Ubuntu • 2.2k views

'Ran the bioinformatics command' - you need to be much more specific about what you were doing or there is no chance anyone will be able to help you. What software? What dataset? Did you use top / htop to profile the memory use of the application?
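To make that concrete, a few stock Linux commands are enough to see whether memory is the bottleneck while the job runs (a minimal sketch; the `kneaddata` pattern in the commented `top` line is just an illustration of attaching to one process):

```shell
# Overall memory and swap usage, in MiB
free -m

# The six most memory-hungry processes right now
ps aux --sort=-%mem | head -n 6

# Or follow a single process interactively (press q to quit):
#   top -p "$(pgrep -f kneaddata | head -n 1)"
```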


Hi, I am running KneadData and HUMAnN 3 to identify functional genes from a shotgun metagenome, using these commands:

kneaddata --input AST2R1.fastq  --input AST2R1.fastq -db $/home/plankton/Kneadata_DIR  --output /home/plankton/Metagenomics_AST_Thatha/CG_DN_935  --trimmomatic /home/plankton/anaconda3/share/trimmomatic-0.39-2 --thread 4

humann --input /home/plankton/Metagenomics_AST_Thatha/CG_DN_935/AST2.1/AST2R1_kneaddata.fastq  --output /home/plankton/Metagenomics_AST_Thatha/CG_DN_935/AST2.1/AST2_humann --nucleotide-database /home/plankton/INSTALL_LOCATION --thread 8

In both cases the system took a long time and then hung.

System config 
Ubuntu 20.04.3 LTS
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  2.0M  6.3G   1% /run
/dev/sdb3       457G  283G  152G  66% /
tmpfs            32G  112K   32G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
kneaddata --input AST2R1.fastq  --input AST2R1.fastq -db $/home/plankton/Kneadata_DIR  --output /home/plankton/Metagenomics_AST_Thatha/CG_DN_935  --trimmomatic /home/plankton/anaconda3/share/trimmomatic-0.39-2 --thread 4

Please correct the following first:

  1. Two identical inputs (--input AST2R1.fastq --input AST2R1.fastq)
  2. Check db variable (-db $/home/plankton/Kneadata_DIR)

In general, variable names do not contain "/".


Thank you. Even with the correct inputs (--input AST2R1.fastq --input AST2R2.fastq), it still takes a long time and hangs at the end.


I realize this is not what you asked about, but I will offer some unsolicited advice. Having 2 GB of swap with 64 GB of RAM is like not having swap at all. It seems you have a relatively small disk, so I understand the impulse not to "waste" disk space, but there will come a time when you won't be able to do things without swap, and you basically don't have any.

Opinions vary as to what the swap size should be - anywhere from 0.5-2x the size of RAM. You have about 3%, which is tiny. Judging by your partition occupancy, you created this system recently, so you still have time to expand the swap.
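For reference, the usual way to grow swap on Ubuntu without repartitioning is a swap file. The sketch below checks what is configured now; the commented commands (which need root, with 64G as an example 1x-RAM size) would add more:

```shell
# What swap is configured right now?
swapon --show
free -h

# To add a 64 GB swap file (run as root; the size is an example):
#   fallocate -l 64G /swapfile
#   chmod 600 /swapfile
#   mkswap /swapfile
#   swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```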


Thank you, Mensur Dlakic. If I increase the swap space to 32 GB, will it work?

Thank you


I have no idea whether it will work in this particular case, because you are not telling us anything about the size of your data or its resource profile (see the suggestion about top / htop). Generally speaking, the swap size you have is tiny for ANY application.
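One way to tell a hang from a memory crash, for what it's worth: if a process is killed for exhausting RAM, the kernel logs it, so something like the sketch below will show it (dmesg may need root on some setups):

```shell
# Did the kernel's OOM killer terminate a process recently?
dmesg -T 2>/dev/null | grep -iE 'killed process|out of memory' | tail

# On systemd systems the journal holds the same record:
#   journalctl -k | grep -i 'out of memory'
```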


Completely off topic, but in my experience, once a program exceeded the available RAM and started using swap it was as good as dead, so I usually don't allocate any swap space at all. Admittedly, I haven't tried on super-modern hardware with ultrafast SSDs, as I would rather look for a more powerful server. So I'm a bit curious: have you really had good experiences with swapping applications?


Using swap is not fun, especially with slow disks. It also depends on how often a given program needs to swap; I think swapping is most effective for programs that have a short peak of memory usage but for the most part fit into RAM during the run. For example, I have been able to run a program that requires ~100 GB of memory on a 64 GB computer with a 64 GB SSD swap. So it is a way, however slow, to surmount memory shortcomings here and there, but I have since moved on to a 256 GB server (still with a 90 GB SSD swap!).
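Related to how painful swapping feels in practice: how eagerly the kernel pushes pages out is governed by the vm.swappiness knob, which can be inspected and (as root) lowered so swap is touched only under real memory pressure. A minimal sketch:

```shell
# Current swappiness (0-100; Ubuntu defaults to 60)
cat /proc/sys/vm/swappiness

# Prefer keeping pages in RAM; swap only as a last resort (needs root):
#   echo 10 > /proc/sys/vm/swappiness
#   # or persistently: add "vm.swappiness=10" to /etc/sysctl.conf
```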


Thank you for your reply. My file is 14 GB (a paired shotgun metagenome sample) and the reference database is 16.8 GB.
I have now increased my swap space to 32 GB and am running the same command again.

top - 15:29:21 up 22:05,  1 user,  load average: 4.37, 3.32, 1.82
Tasks: 370 total,   1 running, 369 sleeping,   0 stopped,   0 zombie
%Cpu(s): 24.6 us,  2.0 sy,  0.0 ni, 73.3 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :  64089.1 total,    383.2 free,   8590.4 used,  55115.5 buff/cache
MiB Swap:  32768.0 total,  32653.8 free,    114.2 used.  54731.3 avail Mem 

I'll update once the command finishes.

Thank you

