I think RNA-seq pipelines for your bog-standard bulk differential-gene-expression experiment are so simple that a) there is no great advantage in using a pre-built pipeline, and b) it's no great demonstration of skill to say you've built your own.
Our standard RNA-seq pipeline is FastQC -> Cutadapt/Trimmomatic (depending on the FastQC output) -> Salmon. Pretty much anyone can implement that, and I'm not sure what you gain from using nf-core in that situation. The skill in these simple RNA-seq experiments is (1) interpreting the QC metrics and (2) doing the downstream statistical analysis, neither of which is really automatable.
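For concreteness, here's a minimal sketch of that three-step pipeline as a shell script. The file names, adapter sequences, thread count and index path are placeholders, not anything from our actual setup; the adapter shown is the standard Illumina TruSeq prefix, which you'd confirm against the FastQC report.

    #!/usr/bin/env bash
    set -euo pipefail

    # Placeholder inputs -- swap in your own reads, adapters and index.
    R1=sample_R1.fastq.gz
    R2=sample_R2.fastq.gz
    ADAPTER_FWD=AGATCGGAAGAGC   # Illumina TruSeq adapter prefix (assumption)
    ADAPTER_REV=AGATCGGAAGAGC
    INDEX=salmon_index          # built beforehand: salmon index -t transcripts.fa -i salmon_index

    # 1. QC the raw reads; eyeball the report before deciding how to trim.
    mkdir -p qc
    fastqc -o qc "$R1" "$R2"

    # 2. Trim adapters and low-quality tails (only if the QC report says you need to).
    cutadapt -a "$ADAPTER_FWD" -A "$ADAPTER_REV" -q 20 -m 25 \
        -o trimmed_R1.fastq.gz -p trimmed_R2.fastq.gz "$R1" "$R2"

    # 3. Quantify against the transcriptome; -l A auto-detects the library type.
    salmon quant -i "$INDEX" -l A \
        -1 trimmed_R1.fastq.gz -2 trimmed_R2.fastq.gz \
        -p 8 -o quant/sample

In practice you'd wrap this in a loop over samples, which is about the point where a proper workflow engine starts to earn its keep.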
If things are more complex, a pre-built pipeline can be more helpful if there is one that does what you want, but I'd guess that despite all the complexities and options, when it actually came to doing something out of the ordinary, even something like the nf-core RNA-seq pipeline wouldn't do quite precisely what you need.
For my first four years as a bioinformatician, I only ever ran mapping through a very complex pre-built pipeline (built by my supervisor in this case). That meant that when I left, I didn't actually know how to run STAR or Bowtie without it, which I think is a bad thing.
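To make that concrete: a bare STAR alignment is only a handful of flags, and it's worth knowing them even when a pipeline normally fills them in for you. The index and read paths below are placeholders; the index would have been built separately with STAR's genomeGenerate run mode.

    # Minimal paired-end STAR alignment -- paths are placeholders.
    STAR --runThreadN 8 \
        --genomeDir star_index \
        --readFilesIn sample_R1.fastq.gz sample_R2.fastq.gz \
        --readFilesCommand zcat \
        --outSAMtype BAM SortedByCoordinate \
        --outFileNamePrefix sample_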
Note that I'm not saying pipelines built using proper workflow engines are a bad thing, or that you shouldn't use or build them, particularly if you have a substantial number of samples. Just that you don't gain any advantage from using a complex, all-singing, all-dancing pre-built one like nf-core.
Basically everything I know today about bash, Nextflow and containers I learned by writing my own pipelines. It's a valuable learning experience; just ask yourself whether you can afford it in terms of time spent writing pipelines versus getting other things done.
I learnt a lot of what I know the same way. It's always been my philosophy to do things the hard but self-improving way, if that's possible.