That algorithm is nonsense. First off, they claim to compress arbitrary DNA into no more than 1.58 bits per base, which is of course impossible. And if you read their paper, you'll find that different DNA sequences give rise to the same code. You're much better off just using gzip.
You need two bits to represent A, C, G, T. However, runs of the same nucleotide (and hence of its repeated 2-bit code) can be further compressed with run-length/block compression or a Burrows-Wheeler transform.
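To make the baseline concrete, here's a minimal sketch of plain 2-bit packing (the helper names `pack`/`unpack` are my own; real formats like UCSC's 2bit also handle N runs and masking, which this ignores):

```python
# Fixed 2-bit code for the four bases; 4 bases fit in one byte.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = "ACGT"

def pack(seq: str) -> bytes:
    """Pack 4 bases per byte, zero-padding the final short chunk."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        b = 0
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(chunk))  # left-align a short final chunk
        out.append(b)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Recover the first n bases from packed bytes."""
    chars = []
    for b in data:
        for shift in (6, 4, 2, 0):
            chars.append(BASES[(b >> shift) & 0b11])
    return "".join(chars[:n])
```

Feeding the packed bytes into a general-purpose block compressor (gzip, bzip2) then picks up whatever run/repeat structure the sequence actually has, without any below-2-bit worst-case claims.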
Right, but they're claiming 1.58 bits in the "worst-case" scenario, which is rubbish. Perhaps that was its performance on the worst input they happened to test, but there must be input cases that require a full 2 bits per base.
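The impossibility is just a pigeonhole count, which a few lines of Python can check (the length 100 and the 1.58 bits/base cap are the only inputs; everything else follows):

```python
from math import log2

# There are 4**n distinct length-n DNA sequences, i.e. 2**(2n) inputs.
# A scheme that guarantees at most 1.58 bits/base has at most
# 2**floor(1.58 * n) output codes of the maximum length, far too few,
# so distinct sequences must map to the same code -- exactly the
# collision behaviour reported for this algorithm.
n = 100
num_sequences = 4 ** n                 # 2**200 possible inputs
num_codes = 2 ** int(1.58 * n)        # 2**158 codes at the claimed cap
assert num_sequences > num_codes
print(log2(num_sequences) - log2(num_codes))  # prints 42.0 (bits short)
```

Allowing all shorter codewords too only adds a factor of about 2 to `num_codes`, nowhere near closing a 42-bit gap, so the worst-case claim can't be lossless.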
Well, frankly, when the sample implementation doesn't work, I usually just move on.
Have you tried contacting the authors?
Well, it seems to be the only possibility left. So I'll try that.