Forum: The Evolution of NGS: What’s New and What’s Next?
7 weeks ago

Hey everyone!

It’s been 7 years since I first kicked off a discussion (Where and how NGS techniques are heading for the next 5 years?) on where next-generation sequencing (NGS) techniques were heading. Time flies! So, I thought it would be great to revisit this topic and explore how much has changed—and what’s on the horizon for the next 5 years.

Looking Back: From the rise of Oxford Nanopore to the advancements in other sequencing technologies, what have been the standout developments in the past few years? How have these changes impacted research and clinical applications?

Looking Forward: What do you think the future holds for sequencing methods? Are there any emerging technologies or innovative approaches that you think will reshape the field?

Computational Methods: On the computational side, how have programming languages and analysis tools evolved? While Python, R, and Shell are still popular, what new languages or frameworks should we keep an eye on? Any tips for budding bioinformaticians on what to learn next?

Let’s share our insights, experiences, and predictions! Feel free to keep it light—perhaps share a funny anecdote or an unexpected twist from your work with NGS.

Looking forward to hearing everyone’s thoughts!

ngs • 871 views
GenoMax 147k • 7 weeks ago

The majority of NGS sequencing will become a commodity, sourced out to the lowest-cost provider, just as happened with Sanger sequencing.

I wrote that in the last thread, and it has mostly come true, though we are not completely there just yet. This trend will likely continue, affecting mostly small/medium academic sequencing centers. They will have to innovate and pivot to things other than simple sequencing.

Single cell everything is the "technology du jour", with "spatial something or other" on the horizon as the rising star. While single cell is likely going to become common, it is probably not going to become universally applicable like plain RNAseq did. We are going to need curated databases of cell types to aid in classification for both of these technologies.

Long reads have become entrenched, as predicted in the prior thread, and that will likely continue. Once the patent battles are fought/won/lost, perhaps BGI's nanopore equivalent will introduce additional competition.

Dr. Keith Robinson (LINK) follows developments in sequencing technologies. Perhaps one of the several players there will introduce something revolutionary in the next few years. For now, nothing seems ready for imminent release.

`rust` seems to be rapidly making inroads with bioinformatics devs and appears to be the designated `python` competitor. Just as `python` replaced `perl` in the 2000s, `rust` may replace `python` in the late 2020s. It may just be a generational thing: as older folks step down/go away, a new generation takes over. They generally have training in the latest and greatest and rightfully want to make their mark.

Note: Redacting a part of my original comment based on clarification from software devs below.


I agree with most of this, but I'm not sure about rust being a python competitor. I'm not sure they serve the same functions - unlike python vs perl, where both were very much designed as high-level scripting and prototyping languages, and ease and speed of development was more important than performance.

I hope that single-cell doesn't become universal - it's just entirely unnecessary in most cases. But I fear that people will do it anyway, just because it's trendy.

As for the commoditisation of sequencing - I absolutely agree. The latest quote I have for RNAseq is £85 a sample. For comparison, this is less than the cost of two boxes of gloves and a box of tips.


rust was not designed to be a competitor to python; it was designed as a competitor to C and C++.

rust integrates really well with python and is becoming a platform of choice, instead of C, for developing high-speed modules for python - see for example polars:

https://docs.pola.rs/api/python/stable/reference/index.html
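Not from the original thread, just a minimal sketch of what that looks like in practice (toy data, invented column names): the polars calls below are plain Python, but the group-by/aggregation they trigger executes in the library's compiled Rust engine rather than the Python interpreter.

```python
import polars as pl

# Toy expression table (column names invented for illustration)
df = pl.DataFrame({
    "gene":   ["BRCA1", "BRCA1", "TP53", "TP53"],
    "sample": ["s1", "s2", "s1", "s2"],
    "counts": [120, 98, 45, 60],
})

# The aggregation runs in polars' Rust backend, released from the GIL
summary = df.group_by("gene").agg(
    pl.col("counts").mean().alias("mean_counts")
)
print(summary)
```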

7 weeks ago

What I'd like to see in the next few years is:

  • long-read read numbers get high enough, for cheap enough, that long reads become the standard for "quantitative" experiments (thinking particularly of RNAseq et al.).
  • Nanopore finally makes direct RNA sequencing an easy-to-implement consumer product.
  • A move towards alignment to graph-based pan-genomes in human genomics, rather than linear references. The latest studies of complete genomes suggest each has 100s of Mb of sequence that aren't part of the reference, either due to being entirely de novo, or due to being sufficiently different from the reference that they don't align (see the toy sketch after this list).
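Not part of the original post, but since the pan-genome point is the most technical one here, a toy Python sketch of the underlying idea (node IDs and sequences all invented): in a variation graph, alternate alleles are alternative paths through shared sequence nodes, rather than mismatches against a single linear reference.

```python
from dataclasses import dataclass, field

@dataclass
class SeqGraph:
    nodes: dict[int, str] = field(default_factory=dict)        # node id -> sequence
    edges: dict[int, list[int]] = field(default_factory=dict)  # node id -> successors

    def add_node(self, nid: int, seq: str) -> None:
        self.nodes[nid] = seq
        self.edges.setdefault(nid, [])

    def add_edge(self, src: int, dst: int) -> None:
        self.edges[src].append(dst)

    def path_seq(self, path: list[int]) -> str:
        # A haplotype is just a walk through the graph
        return "".join(self.nodes[n] for n in path)

g = SeqGraph()
g.add_node(1, "ACGT")   # shared flank
g.add_node(2, "A")      # reference allele
g.add_node(3, "G")      # alternate allele (a "bubble" in the graph)
g.add_node(4, "TTGA")   # shared flank
g.add_edge(1, 2); g.add_edge(1, 3)
g.add_edge(2, 4); g.add_edge(3, 4)

print(g.path_seq([1, 2, 4]))  # reference haplotype: ACGTATTGA
print(g.path_seq([1, 3, 4]))  # alternate haplotype: ACGTGTTGA
```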

Nanopore finally makes direct RNA sequencing an easy-to-implement consumer product

Technically this is possible now. People likely do this, though I don't have first-hand experience yet.


I know it's a technical possibility. But people I know who do this say it's still very much an experimental technique. You need to learn the skill. You can't, yet, just pick the kit up off the shelf and be confident it's going to work, nor do any service providers I know offer it as a service.

What I'm looking forward to is when it's a consumer product, rather than an expert technique.

7 weeks ago

In the single cell field, I feel like the focus is now more on integrating modalities to analyze datasets from different angles: either analyzing them separately and making a story out of it, or trying to use neural networks to make connections between datasets.

A massive number of papers are coming out with various machine learning models for modality prediction or detection of potentially hidden features like enhancers and transcription factors. In my opinion, they are at the moment hard to rely on, because those methods are too specifically trained and will fail if transposed to an alternative context. This area is growing so fast that there is no time to test new models and give feedback to improve them. Every week a new method is out, and no validated, standardized methods stand out.

The spatial aspect is taking over single cell little by little. It is still a challenge to get spatial resolution and single cell resolution at the same time, but we are slowly getting there. Gene panels for transcriptomic datasets can be used to get both resolutions, but I have no doubt it will soon be possible to get chromatin accessibility, histone marks, proteomics... both spatially and at single cell resolution.

Labs are trying DIY solutions to get single cell resolution in-house, so as not to rely on 10x Genomics too much - for example, Smart-seq3.

As GenoMax mentioned, the number of single cell datasets and atlases is massive. One cornerstone would be to homogenize those datasets to have a curated database of cell types, disease contexts, and developmental stages.

In the next few years, I think the association of perturbation screening (by activation, inactivation, or knockout) and single cell will be a major focus to decipher the mechanisms of gene pathway regulation. From my experience, the bottleneck is hitting the sweet spot of infection rate, giving a homogeneous and sufficient number of cells infected by each single guide.
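To make that bottleneck concrete, here is a back-of-the-envelope sketch (not from the post; this is just the standard Poisson model of lentiviral infection, with illustrative MOI values): raising the infection rate recruits more cells, but increases the fraction carrying more than one guide.

```python
from math import exp

# Poisson model: number of guides per cell ~ Poisson(MOI)
for moi in (0.1, 0.3, 0.5, 1.0):
    p0 = exp(-moi)             # uninfected cells
    p1 = moi * exp(-moi)       # cells with exactly one guide
    p_multi = 1 - p0 - p1      # cells with two or more guides
    single_among_infected = p1 / (1 - p0)
    print(f"MOI {moi:.1f}: infected {1 - p0:.0%}, "
          f"single-guide among infected {single_among_infected:.0%}, "
          f"multi-guide {p_multi:.1%}")
```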

Last but not least, when single cell arose, post-transcriptional events were not the major focus and everyone jumped on gene expression. Now that the hype is dying down, looking at alternative splicing events is slowly coming back, at single cell resolution this time.

On a futuristic note, the number of single cell datasets is so gigantic that it would be possible to infer one modality from another via specific neural networks such as autoencoders.
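A minimal sketch of that idea (my own toy example, not from the post: random tensors stand in for real data, and all layer sizes are invented): an encoder compresses one modality into a latent space and a decoder predicts another modality from it, trained end to end like an autoencoder.

```python
import torch
import torch.nn as nn

n_cells, n_genes, n_peaks, latent = 256, 2000, 5000, 32

# Stand-ins for normalized expression and chromatin accessibility
rna = torch.rand(n_cells, n_genes)
atac = torch.rand(n_cells, n_peaks)

model = nn.Sequential(
    nn.Linear(n_genes, latent), nn.ReLU(),  # encoder: RNA -> latent space
    nn.Linear(latent, n_peaks),             # decoder: latent -> other modality
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(rna), atac)  # reconstruct ATAC from RNA
    loss.backward()
    opt.step()
```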


homogenize those datasets to have a curated database of cell types, disease contexts, and developmental stages

Granted, I've minimal experience with scRNAseq, and apologies if this is off-topic here... I have the impression that integration is used too liberally in scRNAseq. I understand the convenience of combining datasets and working with just a representative, integrated one. But in doing so, don't you lose valuable information about the variability between experiments? There seems to be an assumption that there exists a single cell atlas that we are trying to map (like a geographical atlas), but in contrast to a geographical atlas, the expression atlas changes with time and between replicates. Isn't integration removing variation we are/should be interested in?


I was thinking more of nomenclature homogenization rather than count matrix integration. It would indeed be sketchy to integrate gene expression into a single representative dataset because, as you mentioned, each experiment is unique. However, I do think that one should find the same cell types when looking at the same biological metadata (same organ region, disease context, time point...). Those would be interesting to integrate.
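For illustration only (labels and mappings invented), nomenclature homogenization can be as simple as mapping each dataset's free-text cell-type labels onto a shared controlled vocabulary before comparing annotations across studies:

```python
# Toy mapping from dataset-specific labels to one shared vocabulary
SHARED_VOCAB = {
    "t cell": "T cell",
    "t-cells": "T cell",
    "cd4+ t": "T cell",
    "b cell": "B cell",
    "b lymphocyte": "B cell",
}

def harmonize(label: str) -> str:
    # Fall back to the original label when no mapping is known
    return SHARED_VOCAB.get(label.strip().lower(), label)

dataset_a = ["T-cells", "B lymphocyte", "NK cell"]
print([harmonize(x) for x in dataset_a])  # ['T cell', 'B cell', 'NK cell']
```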

