State Of Computational Genomics
3
2
14.7 years ago
Suk211 ★ 1.1k

The other day I posted this link on Facebook, and some of my friends started discussing the article.

It's "Watson Meets Moore" as Ion Torrent founder Jonatha Rothberg introduces post-light semiconductor sequencing

A few comments from my friends:

Krishna: "I don't understand this rat race for next-gen sequencing. While it's true that cheaper and faster sequencing technology would revolutionize personal genomics, it's more important to develop effective algorithms that can make sense of the zillions of data points that will be generated. We have hundreds of organisms sequenced, but still don't seem to understand a bit of the complexity of the genome!!"

Abhishek Tiwari: @Krishna I could not agree more. People think that by commoditizing genome sequencing, some day a miracle will happen and we will be able to understand the complexity of the genome. I am afraid we are going to get lost in the data without any clue what we are looking for. Seen another way, diverting too much funding into these sequencing projects makes it very hard to sustain bioinformatics research.

We agreed that computational genomics is not on par with its experimental counterpart at the moment. I was wondering what you all think about this, and how we can make sure we don't lose the "interesting information" coming out of these sequencing projects.

genomics sequencing • 2.5k views
0

Thanks for fixing the typo; I was working on something else while posting this and overlooked it.

4
14.7 years ago
Nicojo ★ 1.1k

I agree that there is a bit of a lag between advances in sequencing technologies (from a physico-chemical point of view) and the computational means needed to actually take advantage of them...

But is it really a problem?

  • All these technologies have strengths and weaknesses, so it's good that there are different technologies.
  • The processing of the data will eventually come around to a mature state.
  • The deluge of data is certainly not delved into yet, but would we even imagine how to do so if the data were not already available?
  • We are doing science, not selling cars: we need to be at the forefront of any and all advancements and not hold on to vintage tech just because we "know" how to process it (or how to make money with it, e.g. combustion vs. electric engines).

As for your last question, about how to make sure we don't lose the "interesting information" coming out of the sequencing projects: well, I'd say we aren't losing anything, since the raw data is well preserved in databanks! And concerning future discoveries from that data, you can't lose what you haven't found yet ;)

To sum up: advances are good, rat-race is good, lots of poorly understood data means we've got lots of work to do! And since I like what I do, I'm happy I won't have to go into another field of research ;)

Forward the Foundation!

4
14.7 years ago
Mndoci ★ 1.2k

I don't understand why this is a problem either. You need to take that next step in data production, and there is a lot of innovation going on there, as well as in the computational aspects of primary analysis. The downstream innovation will happen too (and, based on some work I am aware of, it already is).

The best innovation comes from rat races. The market (in this case science, which fundamentally likes choice) will decide who gets to "win" and the algorithmic/analytics innovation will follow.

3
14.7 years ago

I think every advance in technology creates new opportunities for those who know computation.
