Hi there
I have a quick question. I am dealing with a few gigabytes of FASTA files, trying to develop a statistical analysis of protein composition (counting amino acids and k-mers, and using some probabilistic models), and I need some random data to use.
I have already written the randomization code, and my question is: do I need to randomize all the data? The randomized data is based on the original FASTA files: the code reads each sequence and shuffles it, keeping the length and the amino acid composition of each sequence.
For example:
>seq 1
iterable
to:
>random_seq1
rtaieelb
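To make it concrete, my randomization step is roughly like this (a simplified sketch of the idea; Biopython's SeqIO and the file names are just placeholders for what I actually use):

```python
import random
from Bio import SeqIO

def shuffle_sequence(seq):
    # Shuffle the residues: length and amino-acid composition stay the same
    residues = list(seq)
    random.shuffle(residues)
    return "".join(residues)

with open("random_proteins.fasta", "w") as out:
    for record in SeqIO.parse("proteins.fasta", "fasta"):
        out.write(f">random_{record.id}\n")
        out.write(shuffle_sequence(str(record.seq)) + "\n")
```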
I am trying to identify, count, and compute some statistics about:
- string composition
- short substrings of length k
- estimation of the distribution of characters and substrings in the data set
- whether the distribution of k-length substrings sharing the same composition is random or not (the counting and grouping is sketched below this list)
- identification and analysis of outlier (over-represented and under-represented) substrings
- searching databases for the presence of these substrings in structural and functional portions of sequences
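The counting and grouping by composition looks roughly like this (a rough sketch; the function names are just illustrative):

```python
from collections import Counter

def kmer_counts(seq, k):
    # Count all overlapping substrings of length k in one sequence
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def composition_key(kmer):
    # k-mers with the same composition share a key, e.g. "AKV", "KVA", "VAK" -> "AKV"
    return "".join(sorted(kmer))

def counts_by_composition(counts):
    # Collapse k-mer counts into counts per composition class
    grouped = Counter()
    for kmer, n in counts.items():
        grouped[composition_key(kmer)] += n
    return grouped
```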
Thank you for your time and attention!
Paulo
I probably need to know better what statistical test you are applying here, but essentially you need to randomize all the sequences several times, IMHO.
Hi JC, I am thinking of a Bonferroni correction and a two-sided Fisher's exact test for the k-mers, at least for the moment, but I am checking other possibilities.
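Something along these lines is what I have in mind for each k-mer (just a sketch under my assumptions; real_counts and random_counts would be the k-mer counts from the original and the shuffled data):

```python
from scipy.stats import fisher_exact

def kmer_test(kmer, real_counts, random_counts, n_tests):
    # 2x2 table: occurrences of this k-mer vs. all other k-mers,
    # in the real data vs. the randomized data
    real_total = sum(real_counts.values())
    rand_total = sum(random_counts.values())
    a = real_counts.get(kmer, 0)
    b = random_counts.get(kmer, 0)
    table = [[a, real_total - a], [b, rand_total - b]]
    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    # Simple Bonferroni correction over the number of k-mers tested
    return odds_ratio, min(p * n_tests, 1.0)
```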
Thanks
What you're trying to do is unclear. What do you need random data for?
Comments were added to the initial question.
None? 8(