There are two or three different meanings of the term "normalization" here, and I think that may be causing confusion.
The first use of normalization is a process applied to remove redundancy during de Bruijn graph assembly. It relates to estimating which transcripts are present, not in what quantity, and it is used in conjunction with de novo assembly tools such as Trinity. I think this is what you are referring to when you ask whether you need to normalize even though you have removed redundant transcripts. I believe this kind of normalization is built into the tools these days, so you generally don't have to worry about whether to apply it yourself.
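To make the idea concrete, here is a minimal Python sketch of coverage-based ("digital") read normalization in the spirit of tools like khmer; the k-mer size and target coverage are illustrative values, not recommendations, and real implementations are far more memory-efficient.

```python
from collections import Counter

def median_kmer_coverage(read, kmer_counts, k=20):
    # Median count, among reads kept so far, of the k-mers in this read
    counts = sorted(kmer_counts[read[i:i + k]] for i in range(len(read) - k + 1))
    return counts[len(counts) // 2] if counts else 0

def digital_normalize(reads, k=20, target_coverage=20):
    """Keep a read only if its median k-mer coverage is still below the target,
    so redundant reads from already well-covered transcripts are discarded."""
    kmer_counts = Counter()
    kept = []
    for read in reads:
        if median_kmer_coverage(read, kmer_counts, k) < target_coverage:
            kept.append(read)
            for i in range(len(read) - k + 1):
                kmer_counts[read[i:i + k]] += 1
    return kept
```

The point is that this step thins out redundant coverage before assembly; it has nothing to do with comparing expression levels between samples.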
The second meaning, closely related to the first, is removing duplicate reads from a dataset because they may have been created by PCR amplification of the same original fragment. Research on this problem continues, but in general the advice is not to de-duplicate (i.e. "normalize") RNA-seq data.
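For illustration only, a naive de-duplication pass might flag reads with an identical sequence as potential PCR duplicates; real tools (e.g. Picard MarkDuplicates or UMI-based methods) work on alignment coordinates and unique molecular identifiers rather than raw sequence, so treat this as a sketch of the concept.

```python
def flag_duplicates(reads):
    """Flag reads whose sequence has already been seen (a crude duplicate proxy)."""
    seen = set()
    flags = []
    for read in reads:
        flags.append(read in seen)  # True means "looks like a duplicate"
        seen.add(read)
    return flags
```

The reason this is usually skipped for RNA-seq is that highly expressed genes legitimately produce many identical fragments, so removing them biases the counts.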
The final meaning is the one @ATPoint refers to above: making sure that the same count in two different samples means the same thing. This normalization is crucial and you absolutely must do it. It's easy to see why: if you sequenced 1 million reads in one sample and got back 100,000 reads from a gene, and sequenced 100,000 reads in another sample and got back 10,000 reads for the same gene, you would not want to compare 100,000 to 10,000 directly. Hence the need to normalize (see the sketch below). Luckily this sort of normalization is very quick and easy in modern RNA-seq analysis packages.
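To make the arithmetic concrete, here is a minimal counts-per-million (CPM) sketch using the numbers above. Note that packages such as DESeq2 and edgeR use more robust scaling factors (median-of-ratios, TMM) rather than plain CPM, so this is purely an illustration of why depth matters.

```python
def counts_per_million(gene_count, total_reads):
    # Scale a raw count by the sample's sequencing depth
    return gene_count / total_reads * 1_000_000

# The example from the text: after scaling, both samples agree
sample_a = counts_per_million(100_000, 1_000_000)  # 100000.0 CPM
sample_b = counts_per_million(10_000, 100_000)     # 100000.0 CPM
print(sample_a, sample_b)
```

Once both counts are expressed per million sequenced reads, the apparent ten-fold difference disappears, which is exactly what between-sample normalization is for.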
Thanks for the nice explanation.