Hello,
Apologies if this question has been asked before -- I'm really trying to wrap my head around the reasoning behind neural network architecture design.
I'm designing a convolutional neural network in PyTorch to classify DNA sequences, based on this paper: https://www.researchgate.net/publication/301703031_DNA_Sequence_Classification_by_Convolutional_Neural_Network; however, I'm very confused about the meaning of the convolution layer's parameters and what arguments I'm supposed to pass into it.
From this paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4908339/ the task of classifying DNA sequences can be done with a 1D convolutional layer whose input has shape (L, 4), where L is the fixed length of the input DNA sequences and 4 comes from one-hot encoding the four bases. Would it therefore be correct to define an nn.Conv1d layer and pass in in_channels=(L, 4)?
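For concreteness, here is what I have so far. The sequence, kernel size, and channel counts below are made up by me just to test shapes; my current guess is that in_channels should be the 4 one-hot channels rather than a tuple, but I'm not sure that's right:

```python
import torch
import torch.nn as nn

# Map each base to a channel index for one-hot encoding.
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    # Conv1d expects (channels, length), so build a (4, L) tensor.
    x = torch.zeros(4, len(seq))
    for i, base in enumerate(seq):
        x[BASES[base], i] = 1.0
    return x

seq = "ACGTACGTAC"              # made-up sequence, L = 10
x = one_hot(seq).unsqueeze(0)   # add batch dim -> shape (1, 4, 10)

# My guess: in_channels = 4 (the one-hot channels), and L is just
# the spatial dimension the kernel slides over. 16 and 5 are arbitrary.
conv = nn.Conv1d(in_channels=4, out_channels=16, kernel_size=5)
y = conv(x)
print(y.shape)  # torch.Size([1, 16, 6]) since 10 - 5 + 1 = 6
```

This runs without errors, but I don't know if treating the one-hot dimension as in_channels is what the paper intends.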
The next thing I'm confused about is how to determine the number of output channels. From the PyTorch tutorial (https://pytorch.org/tutorials/beginner/introyt/modelsyt_tutorial.html):
> A convolutional layer is like a window that scans over the image, looking for a pattern it recognizes. These patterns are called features, and one of the parameters of a convolutional layer is the number of features we would like it to learn. The second argument to the constructor is the number of output features. Here, we're asking our layer to learn 6 features.
In this application, how many features are we asking the layer to learn? As far as I know, the number of output features isn't a tunable hyperparameter like kernel size, so it shouldn't be arbitrary and must be determinable somehow -- my question is, how is it determined?
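To try to answer this myself, I've been probing shapes with dummy data. The 16 and 32 below are numbers I picked arbitrarily, which is exactly what confuses me, because it seems like they should be determined by something:

```python
import torch
import torch.nn as nn

# Toy probe (not from the paper): pass a dummy batch through two
# stacked Conv1d layers and print shapes, to see how out_channels
# of one layer becomes in_channels of the next.
x = torch.randn(1, 4, 100)                # (batch, one-hot channels, L=100)

conv1 = nn.Conv1d(4, 16, kernel_size=5)   # 16 filters -> 16 output channels
conv2 = nn.Conv1d(16, 32, kernel_size=5)  # must accept 16 input channels

h = conv1(x)
print(h.shape)  # torch.Size([1, 16, 96]) since 100 - 5 + 1 = 96
y = conv2(h)
print(y.shape)  # torch.Size([1, 32, 92]) since 96 - 5 + 1 = 92
```

So I can see that the out_channels of one layer constrains the in_channels of the next, but I still don't see what fixes the out_channels values themselves.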
Lastly, is there a good general resource for understanding the inputs and outputs of each layer type and what should be passed between them, or maybe an example PyTorch project on GitHub that does something similar and is easy to follow? I've been looking around for a concise explanation, so anything would be much appreciated. Thank you!