NGS technologies have strong hardware implications at several steps of the entire process, and these have to be considered as a whole when building everything from scratch. usually a group tends to reuse already available machinery and networking, so optimizing the existing resources and properly evaluating future needs is mandatory to end up with the best NGS system for you.
you will have to consider that the sequencer itself writes the raw data onto a computer (or small cluster, like the one SOLiD ships attached to the sequencer) that actually controls it. this raw data is usually processed on a different machine, since the original one is focused on controlling the sequencing process and handling the raw data, so if you want to have your sequencer up and running as much as possible you don't want to overload its controlling computer with mapping or any other secondary/tertiary analysis.
the first issue that arises here is moving the data off that machine to wherever you will store and analyze it. data sizes are definitely an issue, so the connection between the sequencer computer and the analysis machine should be as fast as possible. a gigabit connection, as mentioned here, would be advisable, although if you aren't able to upgrade your network, or if the line to your analysis machine goes through paths you cannot control, you will have to calculate transfer times bearing in mind that you will typically have to move a few hundred GB out of the sequencer.
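just to get a feel for those transfer times, here is a rough back-of-the-envelope sketch; the 300 GB run size and the 60% effective throughput are assumptions, so plug in your own figures:

```python
# rough transfer-time estimate for moving a run off the sequencer host;
# the data size and effective throughput below are assumptions, adjust to your setup
def transfer_hours(data_gb, link_gbps, efficiency=0.6):
    """hours needed to move data_gb gigabytes over a link_gbps link,
    with 'efficiency' accounting for protocol overhead and shared usage"""
    data_gbits = data_gb * 8.0
    effective_gbps = link_gbps * efficiency
    return data_gbits / effective_gbps / 3600.0

# e.g. 300 GB over a dedicated gigabit line at ~60% effective throughput
print("%.1f hours" % transfer_hours(300, 1.0))  # -> about 1.1 hours
```

on a shared 100 Mbit line the same transfer easily stretches to over half a day, which is why controlling the path between the sequencer and your analysis machine matters.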
when it comes to storing data, you will also have to decide what to keep and what to leave behind. for instance, it was hard for us to give up on the raw images, but when we calculated the storage costs of those images we saw that it was cheaper to repeat the experiment than to store the images for a long time. take into account that if you don't store the raw images (typically a few TB in size) you not only save storage capacity but also data transfer time.
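that decision on raw images boils down to a simple expected-cost comparison; the figures below are made-up placeholders, not our real costs:

```python
# toy comparison: archive the raw images vs. repeat the experiment if ever needed;
# every number here is a placeholder, substitute your own storage and re-run costs
def keep_vs_rerun(image_tb, cost_per_tb_year, years, rerun_cost, rerun_probability):
    """expected cost of archiving the images vs. expected cost of repeating the run"""
    archive_cost = image_tb * cost_per_tb_year * years
    expected_rerun_cost = rerun_cost * rerun_probability
    return archive_cost, expected_rerun_cost

archive, rerun = keep_vs_rerun(image_tb=3, cost_per_tb_year=300, years=3,
                               rerun_cost=5000, rerun_probability=0.05)
print(archive, rerun)  # 2700 vs 250: with these numbers, archiving the images loses
```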
once you have solved those basic issues (until now you only have unprocessed raw data) you will have to start thinking about mapping that data, which is typically a very resource-demanding step. since mapping is a highly parallelizable problem, most mappers are at least multicore-aware, and some of them can indeed be run on supercomputers through job queues. you will then have to decide which program or programs you want to use, and then think about the machine they will require. again, you will typically end up needing a small (or large, depending on your needs) multinode computer on which to perform the mapping step. from then on you will probably use the same cluster to perform further downstream analysis (e.g. variant calling).
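as a very rough illustration of the multicore side, here is a minimal sketch that fans read chunks out over the cores of a single node; "my_mapper" and its options are placeholders, not any real mapper's interface:

```python
# minimal sketch: one mapping job per read chunk on a multicore node;
# "my_mapper" and its options are placeholders for whatever mapper you choose
import glob
import subprocess
from multiprocessing import Pool

def map_chunk(chunk_path):
    # placeholder command line; real mappers (bwa, bowtie, ...) have their own options
    cmd = ["my_mapper", "--reads", chunk_path, "--out", chunk_path + ".bam"]
    subprocess.check_call(cmd)
    return chunk_path

if __name__ == "__main__":
    chunks = sorted(glob.glob("reads_chunk_*.fastq"))
    pool = Pool(processes=8)              # one worker per core on an 8-core node
    for done in pool.imap_unordered(map_chunk, chunks):
        print("finished %s" % done)
    pool.close()
    pool.join()
```

on a proper cluster you would submit one such job per node through the queueing system rather than looping locally, but the idea of splitting the reads and mapping chunks in parallel is the same.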
as an example case I will give you some numbers we humbly deal with at our lab. as I've mentioned, we have a SOLiD machine that came with a "little cluster" attached, made of 1 head node and 3 computing nodes (8 cores and 16GB of memory each), with 10TB of shared storage (the online cluster). we are connected through a gigabit line to a local supercomputing center, where we have set up a customized cluster made of 1 head node and 5 computing nodes (8 cores and 24GB of memory each), with 5TB of shared storage (the offline cluster). our standard workflow generates a few hundred GB of .csfasta and .qual files, which we move from the online to the offline cluster in a couple of hours, and then we start mapping and calling SNPs and small indels. this generates a few GB of results as BAM files and variant lists, which we access differently: we leave the BAM files on the remote cluster for archiving and visualization purposes only (launching IGV locally and pointing it to the remote BAM files works perfectly), and we do the main research work on the variant lists, which are only a few MB.
this is in fact a very useful suggestion. since BAM files can be converted back to FASTQ (using SamToFastq, for instance), keeping only optimized and non-redundant files should be mandatory. our experience is that doing so reduces storage needs to about one third of the raw csfasta+qual files we get from our SOLiD machine.
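as a rough illustration (not our exact pipeline), a small wrapper like the one below can regenerate FASTQ from an archived BAM; the jar location is an assumption and paired-end data also needs a second FASTQ output, so check your Picard version's documentation:

```python
# regenerate FASTQ from an archived BAM with Picard's SamToFastq;
# the jar path is an assumption, and paired-end BAMs also need SECOND_END_FASTQ
import subprocess

def bam_to_fastq(bam_path, fastq_path, picard_jar="/opt/picard/SamToFastq.jar"):
    cmd = ["java", "-jar", picard_jar,
           "INPUT=" + bam_path,
           "FASTQ=" + fastq_path]
    subprocess.check_call(cmd)

bam_to_fastq("sample.bam", "sample.fastq")
```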
This is only true for Fastq; SFF files from 454, for example, contain additional data. And many tools do not understand BAM directly, but need Fasta or Fastq.
YMMV. It works well if you're using HiSeqs; both the WTSI and Broad use this approach and dynamically convert to Fastq for those apps that need it.