GPU implementation: Tesla Kepler accelerator?
9.9 years ago

I want to buy a GPU in order to obtain more computing power. This is not such an easy thing here in Argentina, since I need to import it, which comes with about 2 kg of paperwork, and since funding is limited I want to do it right. So far I have settled on the option of buying a single Kepler K40 GPU accelerator and connecting it to existing infrastructure that my neighbours at the physics department have. Is there anything else I should consider? I know I would need a free PCI Express generation 3 x16 slot and memory to go with it, but that is about it.

GPU Server • 2.4k views

+1 for Lukas's answer. So far the only application I have come across that really needs these is cryo-EM. That said, there are other applications; see https://biomanycores.org

9.9 years ago
Lukas Pravda ▴ 80

First, you need to ask yourself what the main purpose of use is, since the GPU by itself does not guarantee better performance. You need specialized tools that can utilize the GPU and get the most out of it. Which software are you planning to use?

I assume that you would like to build a powerful computational cluster. There are additional aspects you have to bear in mind, but they are rather technical: a sufficient power supply (these GPUs are really power-hungry), a reasonable CPU (not all of the calculations are done on the GPU), cooling, and so on. If you are inexperienced with these topics I would strongly encourage you to ask someone who is, perhaps someone from your IT department or the IT faculty. Before spending your own budget, are you aware of NVIDIA's academic hardware grant program? They donate really powerful pieces of hardware free of charge for academic, non-profit purposes.
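For what it is worth, once a card is installed you can quickly check whether the software stack actually sees it, either with the nvidia-smi command-line tool or with a few lines against the CUDA runtime API. A minimal sketch (the file name is just an example; compile with nvcc):

    // deviceinfo.cu -- list the CUDA devices visible to the system and
    // their on-board memory. Compile with: nvcc deviceinfo.cu -o deviceinfo
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int n = 0;
        cudaError_t err = cudaGetDeviceCount(&n);
        if (err != cudaSuccess || n == 0) {
            std::printf("No CUDA-capable device visible (%s)\n",
                        cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < n; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("Device %d: %s, %.1f GB on-board memory, "
                        "compute capability %d.%d\n",
                        i, prop.name,
                        prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                        prop.major, prop.minor);
        }
        return 0;
    }

Any CUDA-enabled tool will be doing something equivalent internally before it offloads any work to the card.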


Indeed, the idea is to integrate it into an existing cluster at the physics department; they have a technician in charge, and I presume he will take care of the technical details. I intend to use it with existing software that has good CUDA support, such as PhyML and MrBayes (via BEAGLE), MAFFT and AMBER.

I did not know NVIDIA had that program, great tip! Thanks!

Arjen


It is obvious then :). Ask the technician and write a GPU proposal. They evaluate them every two weeks, so if you are successful you will have the device in your lab in less than a month, which I guess is about the same time it takes to fill in 2 kg of paperwork.

9.9 years ago

Thanks for all the comments. As a follow-up question, something that I cannot get clear: memory and communication speed. The third-generation Tesla GPUs (Kepler) come with on-board memory; the K40 has 12 GB, the K20 has 5 GB. In my experience memory is always limiting, but I have no clue how GPUs use memory. Is the on-board memory like cache memory? If I put, say, 32 GB of additional memory on the motherboard, is that the point where the speed of data transfer via PCIe becomes important, or is PCIe just used to communicate with the CPU?
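To make the question concrete, my current understanding is that the 12 GB on the card is not a cache but separate, explicitly managed memory, and that anything sitting in host RAM has to be copied across PCIe before the GPU can touch it, roughly like this (a minimal illustrative sketch, not taken from any of the tools mentioned above; compile with nvcc):

    // pcie_copy.cu -- illustrate the difference between host (motherboard) RAM
    // and the GPU's on-board memory, and time the explicit copy across PCIe.
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    int main() {
        const size_t n = 256 * 1024 * 1024;        // 256M floats = 1 GiB
        std::vector<float> host(n, 1.0f);          // lives in motherboard RAM

        float *dev = nullptr;                      // will live in the card's on-board memory
        if (cudaMalloc(&dev, n * sizeof(float)) != cudaSuccess) {
            std::printf("cudaMalloc failed -- not enough on-board memory?\n");
            return 1;
        }

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        // This copy is the traffic that actually crosses the PCIe bus.
        cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        std::printf("Copied %.2f GB host -> device in %.1f ms (%.1f GB/s)\n",
                    n * sizeof(float) / 1e9, ms, n * sizeof(float) / 1e6 / ms);

        cudaFree(dev);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return 0;
    }

Is that roughly right, i.e. once the working data fits in the card's own memory, PCIe speed mostly matters for getting data on and off the card?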

This is an important question for me, since the K20 appears NOT to function with PCI Express gen 3, and I will likely buy either a K40 or a K20 (depending on....)
