lagartija · 5.6 years ago
Hi,
I am mapping reads to genomes to estimate their abundance in several metagenomic samples. My problem is that when I keep only reads longer than 33 nt I get a completely different result than when I keep all the reads. Which criteria should I apply for the mapping?
Thank you very much
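In case it helps, this is roughly the kind of length filter I mean (a minimal pure-Python sketch for plain FASTQ records; the 34 nt cutoff, i.e. "longer than 33 nt", and the example reads are illustrative, not my real data):

```python
def filter_fastq_by_length(lines, min_len=34):
    """Yield 4-line FASTQ records whose sequence is at least min_len nt."""
    it = iter(lines)
    for header in it:
        seq = next(it)
        plus = next(it)
        qual = next(it)
        if len(seq.strip()) >= min_len:
            yield header, seq, plus, qual

# Two toy records: one 40 nt read (kept), one 8 nt read (dropped).
records = [
    "@read1\n", "ACGT" * 10 + "\n", "+\n", "I" * 40 + "\n",
    "@read2\n", "ACGTACGT\n", "+\n", "IIIIIIII\n",
]
kept = list(filter_fastq_by_length(records))
print(len(kept))  # → 1
```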
Doesn't that make sense? If you use different input reads you will get different results. It would be useful if you could explain the reason for removing reads shorter than 33 nt.
PS: Please refer to: Brief Reminder On How To Ask A Good Question and How To Ask Good Questions On Technical And Scientific Forums
The reason I thought about removing short reads is to get fewer false positives. I looked at the first alignments and saw that most of the matches were short reads, often in repeats. I thought that if only short reads match a genome, they might all be false positives.
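An alternative to dropping short reads up front is to filter the alignments themselves by mapping quality, since reads that hit repeats are typically assigned a low MAPQ (column 5 of a SAM record) by the mapper. A minimal sketch over plain SAM lines (the threshold of 30 and the toy records are illustrative assumptions, not values from this thread):

```python
def high_confidence(sam_lines, min_mapq=30):
    """Keep SAM alignment lines whose MAPQ meets the threshold."""
    for line in sam_lines:
        if line.startswith("@"):       # header lines pass through unchanged
            yield line
            continue
        fields = line.rstrip("\n").split("\t")
        if int(fields[4]) >= min_mapq:  # field 5 (0-indexed 4) is MAPQ
            yield line

sam = [
    "@HD\tVN:1.6\n",
    # MAPQ 60: confidently placed, kept
    "r1\t0\tgenomeA\t100\t60\t36M\t*\t0\t0\tACGTACGT\tIIIIIIII\n",
    # MAPQ 3: ambiguous placement (e.g. a repeat), dropped
    "r2\t0\tgenomeA\t200\t3\t20M\t*\t0\t0\tACGTACGT\tIIIIIIII\n",
]
kept_alignments = [l for l in high_confidence(sam) if not l.startswith("@")]
print(len(kept_alignments))  # → 1
```

With real BAM files the same idea is usually applied with `samtools view -q` rather than hand-rolled parsing.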