To prepare a larger dataset for Velvet, I merged two files of unmapped human reads (a minimal sketch of the merge is just below), and now I have three questions.
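For context, the merge was nothing fancier than concatenating the two read files into one. A small Python sketch of what I did (the filenames are placeholders, and I assume both inputs are plain-text read files in the same format):

```python
# Concatenate two files of unmapped reads into one larger input for
# velveth. Filenames are placeholders; since both inputs are plain text
# in the same format, a straight byte-level copy is enough.
import shutil

with open("merged_unmapped_reads.fq", "wb") as merged:
    for part in ("unmapped_reads_1.fq", "unmapped_reads_2.fq"):
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```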
1) Is this the right way to improve the quality of the Velvet output (I mean the length of the contigs it produces)?
2) After merging, the new file contains nearly 320,000,000 reads, but when I ran Velvet on a machine with 72 GB of memory it used all of it and the runtime ballooned (about 4 days just to finish the hashing step). Do you know how much memory I should allocate? (A rough memory-estimate sketch follows these questions.)
3) Can I split the merged file into 320 smaller files, run Velvet on each of them in parallel, and then merge the Velvet outputs into a single assembly result? (A sketch of the splitting I have in mind is at the end of the post.)
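To explain where my memory worries come from, here is the rough rule-of-thumb formula for Velvet RAM usage that I have seen circulated on the velvet-users list, coded up with my numbers. The read length, k-mer size, and effective span of the unmapped sequence below are guesses on my part, and the constants are only a ballpark, so please correct me if this estimate is off:

```python
def velvet_ram_gb(read_len, genome_size_mb, num_reads_millions, k):
    """Rough Velvet RAM estimate in GB, using rule-of-thumb constants
    circulated on the velvet-users list (a ballpark only, not an exact
    requirement)."""
    kb = (-109635
          + 18977 * read_len
          + 86326 * genome_size_mb
          + 233353 * num_reads_millions
          - 51092 * k)
    return kb / (1024 * 1024)

# My case: ~320 million reads; the read length, k-mer size and the
# span of the unmapped sequence (in Mb) are assumptions for illustration.
print(velvet_ram_gb(read_len=50, genome_size_mb=300,
                    num_reads_millions=320, k=31))
```

With numbers in that range the estimate already comes out around 95 GB, i.e. well above the 72 GB I have, which would at least explain the swapping and the 4-day hashing step.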
P.S. I am using the colorspace version of Velvet.
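And to make question 3 concrete, this is roughly what I mean by "splitting" (a sketch only; I assume 4-line FASTQ records here, so the record grouping would need adjusting for csfasta/qual files):

```python
# Split the merged read file into N_CHUNKS smaller files, keeping
# 4-line FASTQ records intact. Filenames, chunk count and record
# format are illustrative assumptions only.
from itertools import islice

N_CHUNKS = 320
READS_PER_CHUNK = 320_000_000 // N_CHUNKS + 1   # ~1 million reads each

with open("merged_unmapped_reads.fq") as src:
    for i in range(N_CHUNKS):
        chunk = list(islice(src, READS_PER_CHUNK * 4))
        if not chunk:
            break
        with open(f"chunk_{i:03d}.fq", "w") as out:
            out.writelines(chunk)
```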