Help with building a bioinformatics workstation
3
7.3 years ago
odoluca ▴ 20

My work focuses on the analysis of genomes (including human) to discover new motifs and cluster them. The clustering step alone takes a huge amount of processing power: I estimate that the clustering step, alone, will take approximately 2 months on my current computer (all 4 cores loaded, i5-4670K @ 4.1 GHz with 8 GB RAM).

Because it would lock my computer down for such a long time, I haven't actually been able to complete it yet. I will also need more RAM, but I cannot estimate the peak memory requirement until I actually complete a run.

Fortunately, I received approx. 4K dollars of funding for building a workstation, and that is why I need some help.

I would rather go with faster CPUs with more cores, and the new AMD chip, the 1950X, is within an acceptable price range. It also supports 8 RAM sticks in quad-channel configuration at speeds up to 3600 MHz.

However, I hear so much about ECC (Error Correcting Code) RAM and how it is essential for workstations. Unfortunately, ECC RAM is more expensive and slower than non-ECC RAM. There are UDIMM and LRDIMM ECC modules, both only up to 2133 MHz and only with dual-channel capability (at least according to what I have read). The 1950X supports only ECC UDIMMs according to the manufacturer, and the largest ECC UDIMMs I can get my hands on are 16 GB modules.

If I insist on ECC, then (1) I have to get ECC UDIMMs at 2133 MHz with dual-channel capability and use them with the 1950X.

(2) Or I can get a server CPU such as the Intel E5-2630v4 or AMD EPYC 7301, which are in my price range. However, according to the product details, the aggregate CPU frequency (CPU frequency × core count) of these CPUs will be significantly lower than that of the 1950X.

(E5-2630v4 = 20 cores × 3.1 GHz vs. EPYC 7301 = 32 × 2.7 GHz vs. 1950X = 32 × 4 GHz)

On the other hand, a dual-socket server motherboard leaves the potential for a future upgrade with a second CPU and additional RAM sticks.

(3) If I let go of ECC, then I can get RAM sticks with more capacity and higher speed.

So my questions are,

Is ECC crucial for the type of bioinformatics I do? If so, which route should I take, (1) or (2)... or any other suggestions?

hardware ECC Ram workstation computer • 7.1k views
1

I doubt RAM speed will be a significant bottleneck. Even in gaming workloads, RAM speed makes a minimal difference. I'd prioritise capacity, and probably ECC, over speed. Basically all server memory is ECC, for good reason.

Before you decide on your CPU, also assess how well multithreaded your process is. If there are parts where you're reduced to a single core, then clock speed will be a higher priority and aggregate cycles may not count for much. If it's very well multithreaded, then sharing the load will definitely speed the job up.
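As a rough illustration of that trade-off, here is a generic Amdahl's-law calculation (a sketch only, not tied to the poster's actual code; the parallel fractions are made-up examples):

    def amdahl_speedup(parallel_fraction, n_cores):
        """Upper bound on speedup when only part of the work runs in parallel (Amdahl's law)."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

    # If 95% of the run is parallel, 32 slower cores still help a lot...
    print(amdahl_speedup(0.95, 32))   # ~12.5x
    # ...but if only 50% is parallel, extra cores barely matter and clock speed wins.
    print(amdahl_speedup(0.50, 32))   # ~1.9x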

4
7.3 years ago

I estimate that the clustering step, alone, will take approximately 2 months on my current computer (all 4 cores loaded, i5-4670K @ 4.1 GHz with 8 GB RAM).

Sounds like you have plenty of time to make the algorithm faster, then... :)

Looks like Threadripper (1950X) motherboards can have 8 RAM slots, so that's 128 GB (which is the amount we have on most of our compute nodes). And you probably want to populate all 8 slots to fill the four memory channels for max performance, though that's hard to say until reviews are out.

I can't really say much about ECC support in general, since I never know whether some computer problem I have is related to memory errors (and all of the heavy-duty computing I run is on ECC machines anyway). However, it would be really disappointing for a program to crash after 20 days due to a RAM error... or, worse yet, silently give you the wrong answer. In this situation I would go with ECC RAM, since you will be doing long-running jobs involving large amounts of memory, and the chance of a memory error causing a problem is roughly proportional to (amount of memory used) × (length of job). Try to get single-rank memory modules; those seem to work better with Ryzen.
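To make that proportionality concrete, here is a purely illustrative back-of-the-envelope calculation; the error rate below is an assumed placeholder, not a measured figure (real DRAM error rates vary by orders of magnitude between studies and between individual DIMMs):

    import math

    errors_per_gb_per_day = 0.01   # assumed illustrative rate, not a measurement
    memory_gb = 64                 # memory actively used by the job
    job_days = 20                  # length of the run

    expected_errors = errors_per_gb_per_day * memory_gb * job_days
    p_at_least_one = 1 - math.exp(-expected_errors)   # simple Poisson model
    print(f"Expected errors: {expected_errors:.1f}, "
          f"P(at least one) ~ {p_at_least_one:.2f}")

Under those assumed numbers the expected error count over the run is well above one, which is the intuition behind preferring ECC for long, memory-heavy jobs.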

I'm really not sure about the exact differences between Threadripper and EPYC; I think EPYC supports some extra features that you won't use, so in this case I don't see much reason to get an EPYC. Note that the EPYC 7301's base clock is actually 2.2 GHz, not 2.7 GHz (that's its peak turbo, which won't be reached when all cores are running), so everything would take about 50% longer. There are also some 24-core EPYCs, though, which seem more interesting: substantially more expensive than Threadripper for slightly less performance, but they leave open the option of a second socket.

I think the main reason professional CPUs have lower clocks than consumer CPUs is simply to maximize flops/watt: power dissipation matters much more in rack-mounted computers than in standalone workstations, and power use scales nonlinearly with clock rate, so lower clocks are more efficient. Here's a nice picture:

https://i.stack.imgur.com/daciI.png

It's funny that AMD is positioning Threadripper for gaming/enthusiasts, because in my opinion it's really a much better fit for bioinformatics!

1
7.2 years ago
BenHarrison ▴ 10

It is my understanding that the AMD Threadripper platform accepts a maximum DIMM size of 16 GB, so you could potentially go with 8 × 16 GB = 128 GB total. I would suggest that ECC is the way to go for bioinformatics.

I'm interested: did you manage to find 16 GB ECC DIMMs that play well with Threadripper? I would like to build a similar workstation.

0

I have ordered eight KVR24E17D8/16 16 GB DIMMs and am waiting for them to arrive. I'll let you know how it turns out.

1
7.2 years ago
LLTommy ★ 1.2k

Besides the things everybody already said (can you make your algorithm better? is the bottleneck really the hardware?), there is a point where a workstation is just not strong enough and where you have to think about a grid/cloud/high-performance-cluster type of approach. And while not so long ago it was hard to get access to a cluster, times have changed: it is now easy to rent computing power from the well-known vendors, and it is worth considering.

0

It turns out you were right. I was using an external Python module, but I rewrote the algorithm and used numba.jit to speed it up roughly tenfold.
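For anyone curious what that looks like in practice, here is a minimal sketch of the numba.jit approach: compiling a pure-Python inner loop to native code. The function and data below are hypothetical stand-ins, not the poster's actual clustering code:

    import numpy as np
    from numba import njit

    @njit(cache=True)
    def pairwise_hamming(motifs):
        # Pairwise Hamming distances between equal-length, integer-encoded motifs.
        n, length = motifs.shape
        dist = np.zeros((n, n), dtype=np.int64)
        for i in range(n):
            for j in range(i + 1, n):
                d = 0
                for k in range(length):
                    if motifs[i, k] != motifs[j, k]:
                        d += 1
                dist[i, j] = d
                dist[j, i] = d
        return dist

    # Example: 2000 hypothetical motifs of length 20, encoded as 0-3 (A, C, G, T).
    motifs = np.random.randint(0, 4, size=(2000, 20))
    dist = pairwise_hamming(motifs)   # first call compiles; later calls run at native speed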

0

About cloud computing: it is still a confusing subject for me, for example how to set it up. Also, predicting my requirements, such as RAM per processor, could be so far off target...

1

Well, cloud computing sounds fancy, but it's not that hard (at least to start with). If you don't have access to an HPC facility at your university or elsewhere, you can look into Amazon AWS, Google Cloud, and Microsoft Azure (I did not want to name the companies earlier, but there you go). Search for these and you'll find information and tutorials on how to run a program on their infrastructure. The idea behind all of these services is that you rent infrastructure, and by that I mean you rent as much as you need for exactly the time you need it, which is pretty awesome. You might rent a machine with, say, 16 CPU cores and 120 GB RAM for 30 minutes (or a lot more if you need it). This gives pretty much everybody access to very powerful infrastructure at very reasonable prices. So if you really do need lots of power (more than even an expensive laptop can provide) and don't have access via a university or similar, it's definitely worth looking into.
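As a rough sketch of what "renting a machine" looks like in practice, here is the AWS route using the boto3 library (the AMI ID and key pair name are placeholders to fill in with your own values; Google Cloud and Azure have equivalent steps):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one memory-heavy instance (r5.4xlarge: 16 vCPUs, 128 GB RAM).
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder: an OS image in your region
        InstanceType="r5.4xlarge",
        KeyName="my-keypair",              # placeholder: an SSH key pair you created
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)

    # When the job is done, terminate the instance so you stop paying for it.
    ec2.terminate_instances(InstanceIds=[instance_id])

You then copy your data and code to the instance (e.g. over SSH) and run the job exactly as you would locally.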

Your efforts so far have of course not been wasted; it makes sense to test your algorithm on your own machine with less data and make sure everything works before you throw the big guns at the problem. And yes, it's great that you first tried to improve the code itself rather than making a slow, potentially shitty, algorithm work by throwing lots of computational power at it. Good work.

0

That's why you should test with small sets and increase the input size by an order of magnitude each time. You know your algorithm best, so you may be able to extrapolate the time needed as the data gets larger.
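A small sketch of that kind of extrapolation (the timings below are made-up example numbers; substitute your own measurements from small test sets):

    import numpy as np

    # Hypothetical measured runtimes (seconds) at increasing input sizes.
    sizes = np.array([1e3, 1e4, 1e5, 1e6])
    times = np.array([0.4, 6.1, 95.0, 1480.0])

    # Fit t ~ c * n^k on a log-log scale, then extrapolate to the full data set.
    k, log_c = np.polyfit(np.log(sizes), np.log(times), 1)
    full_size = 1e8
    predicted = np.exp(log_c) * full_size ** k
    print(f"Scaling exponent ~ {k:.2f}; predicted runtime at n={full_size:.0e}: "
          f"{predicted / 3600:.1f} hours")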
