We now have PacBio sequencers, which produce longer reads, albeit with many errors. Will there come a time when a class of magical sequencers can correctly read a genome end to end without error? If so, many hot problems like assembly and read alignment will die out. Which areas of bioinformatics would then remain open for bioinformaticians who work on the algorithms/programming side of things? And once you have this magical sequencer, what kinds of algorithmic questions would arise anew? Or would the emergence of such a sequencer mean that only biologists remain, with no need for any bioinformatician?
Actually, this type of sequencer already exists: Oxford Nanopore sequencing on the MinION and PromethION has no read-length limitation. The problem is keeping the DNA intact during library prep. The largest fragment sequenced (for now) is 510 kb.
What about errors in these reads?
Depending on the chemistry used (1D vs 2D prep), the accuracy is currently about 92% and 96%, respectively. So these reads are indeed not error-free, and people working with Illumina will say that's a lot of errors, but in practice it's fine: with sufficient coverage and some polishing, you can get to ~99.9% accuracy.
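To see why coverage compensates for a high per-read error rate, here is a toy back-of-the-envelope sketch. It assumes independent, substitution-only errors and a plain per-base majority vote; real polishers (e.g. Racon, Medaka) do far more than this, so treat the numbers as purely illustrative:

```python
from math import comb

def majority_vote_accuracy(per_read_acc: float, coverage: int) -> float:
    """Probability that a per-base majority vote is correct, assuming
    independent substitution-only errors (a toy model; real nanopore
    errors are correlated and include indels). Odd coverage avoids ties."""
    n = coverage
    # Majority is correct when more than half of the n reads call the base right.
    return sum(comb(n, k) * per_read_acc**k * (1 - per_read_acc)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for acc in (0.92, 0.96):        # roughly 1D vs 2D per-read accuracy
    for cov in (5, 15, 31):
        print(f"per-read {acc:.0%}, {cov}x coverage -> "
              f"consensus ~{majority_vote_accuracy(acc, cov):.4%}")
```

Even this crude model shows per-base consensus accuracy climbing well past 99.9% at modest coverage, which is the intuition behind "coverage plus polishing fixes it".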
Short answer, as nicely summarised by Devon, is: OK, so now you have sequence; get on with the interesting stuff, analysis and interpretation.
It depends on the throughput of the magical sequencer. If it's low, then the only problems it solves are genome assembly/phasing and variant calling. Any question that's based on read counts (RNA-Seq, ChIP-Seq, etc.) will remain.
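For example, even a perfectly accurate read still has to be counted against features before any RNA-Seq or ChIP-Seq question can be answered. A minimal sketch of that counting step, with made-up toy intervals and positions (real pipelines run tools like featureCounts or htseq-count on aligned BAMs):

```python
from collections import defaultdict

# Hypothetical gene intervals and read start positions, for illustration only.
genes = {"geneA": (100, 500), "geneB": (600, 900)}
read_starts = [120, 130, 450, 610, 620, 630, 880]

counts = defaultdict(int)
for pos in read_starts:
    for gene, (start, end) in genes.items():
        if start <= pos <= end:
            counts[gene] += 1

# Downstream questions (differential expression, peak calling) start here:
# even with error-free reads, these counts still need statistical modelling.
print(dict(counts))  # {'geneA': 3, 'geneB': 4}
```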
And someone still has to analyze the data, so bioinformaticians will remain employable for the foreseeable future :-).