Dear all,
I have written several small scripts for sequence analysis: s1.pl, s2.pl, s3.pl, etc. At the moment I have to run them one by one:
perl s1.pl in.fa > out1.txt
perl s2.pl out1.txt > out2.txt
perl s3.pl out2.txt > out3.txt
...
I want to combine them into one script, but my attempt using the following structure fails.
open my $in,  '<', 'in.fa'    or die "Cannot open in.fa: $!";
open my $out, '>', 'out1.txt' or die "Cannot open out1.txt: $!";
# ... process $in, writing to $out ...
close $in;  close $out;
open my $in2,  '<', 'out1.txt' or die "Cannot open out1.txt: $!";
open my $out2, '>', 'out2.txt' or die "Cannot open out2.txt: $!";
# ... process $in2, writing to $out2 ...
close $in2; close $out2;
So could anyone give me a solution? Thank you very much!
Don't open and close files; use STDIN and STDOUT instead:
perl s1.pl < in.fa | perl s2.pl | perl s3.pl > out3.txt
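For this to work, each script only needs to read from STDIN (or from files named on the command line) and write to STDOUT. A minimal sketch of what each sN.pl could look like; the transformation step is a placeholder for whatever the script actually does:

#!/usr/bin/env perl
# Skeleton of a pipeline-friendly script: the diamond operator <>
# reads STDIN (or any files named on the command line), and print
# writes to STDOUT, so the script can sit anywhere in a pipe chain.
use strict;
use warnings;

while (my $line = <>) {
    chomp $line;
    # ... transform $line here (placeholder) ...
    print "$line\n";
}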
I don't see any relevance to bioinformatics in this question, so I'll make this a comment. One solution would be to create separate functions for each script, which would be reusable and testable. The other approach would be to use pipes, as Pierre suggested; but consider whether you need three or more scripts when one would probably suffice (in other words, why close a file just to open it again and continue processing?). In either case, take a look at the Perl docs for open to see how to open a file in Perl (also see the examples on the Perl Maven site). From the command line, you can type
perldoc -f open
to see the documentation and best practices for open(), and the perlsub docs (perldoc perlsub) describe how to write subroutines (one example shows how to get input from the command line).

Agree that, as it stands, this is a pure Perl programming question. Please indicate its relevance to a bioinformatics research problem.
The brief answer is that when you find yourself solving a problem using multiple small scripts, it is time to implement them as functions, or methods in a module.
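As a rough sketch of that idea (the step1/step2/step3 names and their pass-through bodies are placeholders for the logic currently in s1.pl, s2.pl, and s3.pl), the combined script can pass records between subroutines in memory instead of through temporary files:

#!/usr/bin/env perl
use strict;
use warnings;

# Placeholder subroutines standing in for s1.pl, s2.pl and s3.pl:
# each takes a list of records and returns the transformed list.
sub step1 { my @records = @_; return @records; }
sub step2 { my @records = @_; return @records; }
sub step3 { my @records = @_; return @records; }

my $infile = shift @ARGV or die "Usage: $0 in.fa\n";
open my $in, '<', $infile or die "Cannot open $infile: $!";
my @records = <$in>;
close $in;

# No intermediate files: the output of one step feeds the next directly.
print step3( step2( step1(@records) ) );

Once the steps live in a module, each one can also be unit-tested on its own, which is hard to do with standalone scripts.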