Forum: How to imagine the future of bioinformatics?
8.2 years ago
biotech ▴ 570

Ten, 20, 30, 50 years from now.

I would imagine plenty of data ready to analyse, with these old-fashioned '-omics' concepts almost forgotten ;), possibly thanks to data integration at both the wet-lab and in silico levels. Also, lab procedures for generating data will be quite fast.

What about you guys?

bioinformatics • 2.9k views

10, 20, 30, 50 years from now: everyone is looking for some new antibiotics, but it's too late.


We may have found some new ones on Mars.


Yes, and the Mars mission also brought back a new viral strain that kills the population that fled inland because of the rising sea levels...


Current analysis will have become highly automated, reducing the need for bioinformatics specialists; the community responds by developing increasingly complex forms of analysis to remain employed.

Illumina continues to soak up IP to keep people buying their HiSeq 1000000 machines.

Oxford Nanopore still hasn't released their nanopore sequencers to the public.

NCBI still provides poor documentation for all their tools.

Someone invents SuperMassive Data, the next step after Big Data, and keeps convincing managers/etc. that it is a totally new way to deal with the huge piles of data. The market becomes flooded with hundreds of tools doing the same things, which totally aren't reinventing the wheel for no real reason.

PacBio introduces their RLY-SMRT sequencing, capable of 500Mb reads with 30% error rate.

People are still doing RACE to sequence the termini of viral genomes.


People are still doing RACE to sequence the termini of viral genomes.

(No) thanks for dashing the hopes of many :-)


Quite frankly, 20 years ago no one could have imagined where bioinformatics is today, so I don't think it's possible to imagine where we'll be 20 years from now. But I am sure it will still be relevant. :p


In 1953 the structure of DNA was discovered by Watson and Crick.

Some years later, the Sanger sequencing technique was introduced.

In 2001, sequencing a small bacteriophage took from several months to half a year, but there were almost no mistakes in the sequence.

In 2006-2010 scientists learned about the human genome, but it is still a mystery.

Now NGS approaches are 100-1000 times cheaper than even 20 years ago, but the quality of such sequences is still rather far from Sanger results...


but the quality of such sequences is still rather far from Sanger results...

I don't think that is true.


It's true if you consider a single Sanger read versus a single NGS read. However, this is why no one uses 1X coverage, so it's not a fair comparison. At 10X or 100X, which is a more realistic scenario, it's a different story.
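For intuition on why coverage changes the picture, here is a toy calculation (my own sketch, not from the thread): assuming independent per-read errors that all agree on the same wrong base — a pessimistic simplification of real NGS error profiles — the chance that a simple majority vote at a single site is wrong drops very fast with coverage.

```python
from math import comb

def consensus_error_prob(per_read_error: float, coverage: int) -> float:
    """Probability that a majority vote at one site is wrong, in a toy
    binomial model where errors are independent and always agree on the
    same wrong base (pessimistic)."""
    p = per_read_error
    # The consensus is wrong if more than half the reads carry an error.
    threshold = coverage // 2 + 1
    return sum(comb(coverage, k) * p**k * (1 - p)**(coverage - k)
               for k in range(threshold, coverage + 1))

# A lone read fails at the raw per-read error rate:
print(consensus_error_prob(0.01, 1))    # 0.01
# At 10X the majority vote is already vastly more accurate (~2e-10):
print(consensus_error_prob(0.01, 10))
```

Under this (over-simplified) model, a 1% per-read error rate at 10X already beats a single high-quality Sanger read by many orders of magnitude at the consensus level; real data are messier, but the direction of the effect is the same.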


Do we have numbers for this, comparing, say longer read NGS to Sanger?

I know they preferred Sanger for clinical targeted/gene-specific sequencing in my previous lab; do other clinical labs use different techniques?


I know they preferred Sanger for clinical targeted/gene-specific sequencing in my previous lab; do other clinical labs use different techniques?

That may partly be because the FDA has not approved NGS (exception: MiSeqDx for CF and some others) for widespread diagnostic use.


Thank you, genomax2!


Lots of places use NGS for clinical applications. For example, Foundation Medicine has been doing it for years and built a large company based on that.


I'm still writing my PhD thesis.


When should we expect it to be done? Another 10, 20 or 50 years? Hurry up already, we are holding a spot for you on the Mars bioinformatics mission :-)


DNA alignment is still problematic. There are still multi-million dollar efforts to GWAS anything.


Hey, someone was talking about this 6 years ago: Future Directions In Bioinformatics

8.2 years ago
5heikki 11k

As data accumulates, people realize that much of the generated sequence data is unusable due to noise and a lack of precise metadata. So I think one trend is that data will be splintered into smaller, manually curated sets, many of which will either require a license or simply not be available to the public at all. As an example, GenBank is plagued by a lack of precise/correct metadata concerning e.g. taxonomy. I think in 20 years we will certainly have very long and accurate reads, so the amount of annually generated data may actually decrease dramatically, and the problems that algorithms have to solve will often be less difficult than they are now. Hard to say how things will be 50 years from now. My guess is that Linux will still be around. It will probably outlive us all and even guide spaceships to other planets, some day. That is, unless systemd takes over the kernel too..

