Obviously BLAST is going to have better average performance on a smaller database than on a larger one. But for a particular (large) database (say, tens of GB), how does splitting the database into smaller files affect the performance of subsequent BLAST searches? The default is to split the database into files no larger than 1 GB, but is performance significantly affected if I decide to split the same database into chunks of 0.5 GB or 5 GB instead? It doesn't seem like this would make much difference...
...which leads to the second part of my question: what is the purpose of the max_file_sz option? Does file size really affect performance? Or is this perhaps a holdover from the days when 32-bit architectures placed constraints on file size?
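For concreteness, this is roughly how I'd build the databases I'm comparing (input file and output names here are just placeholders); the only thing varied is -max_file_sz, and I'm not certain every BLAST+ version accepts values above 4GB:

    # Same input FASTA, different volume sizes
    makeblastdb -in genomes.fasta -dbtype nucl -out db_default                    # default: 1GB volumes
    makeblastdb -in genomes.fasta -dbtype nucl -out db_small   -max_file_sz 500MB # more, smaller volumes
    makeblastdb -in genomes.fasta -dbtype nucl -out db_large   -max_file_sz 5GB   # fewer, larger volumes

Then the same blastn/blastp search would be run against each of db_default, db_small, and db_large to compare timings.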