The short answer is that your naive computation is incorrect because it doesn't take into account different sequencing depths / library sizes between the different samples.
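To see why that matters, here is a hypothetical illustration (made-up numbers, not your data): the same gene measured in two samples sequenced at different depths. The raw counts suggest a 2-fold change, but the per-million rates are identical once the library sizes are taken into account:

> x <- c(10, 20)
> N <- c(1e6, 2e6)
> log2(x[2]/x[1])                     # naive log2 fold change, ignoring depth
[1] 1
> log2((x[2]/N[2])/(x[1]/N[1]))       # log2 fold change of per-million rates
[1] 0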
It isn't generally true that the edgeR logFCs differ from the naive computation by a constant offset; that is just an accident of your data or of how you did the calculation.
edgeR estimates the logFC using a negative binomial generalized linear model, which is described in detail in the published papers. The model takes into account the counts, the effective library sizes and the dispersions, and it also applies some logFC shrinkage. In the simplest case of a one-way layout with constant effective library sizes and no shrinkage, the logFC returned by glmFit is exactly the log2 ratio of the group mean counts. In general, though, you cannot reproduce the edgeR calculation by a simple formula.
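To be concrete, by the naive formula I mean the log2 ratio of the group mean counts, something like the sketch below (naiveLogFC is just a hypothetical helper name, and it assumes the two groups are labelled "A" and "B"):

> naiveLogFC <- function(y, group)
+   log2(rowMeans(y[, group=="B", drop=FALSE]) / rowMeans(y[, group=="A", drop=FALSE]))

Applied to the toy example that follows, this gives log2(8/1) = 3, the same value that glmFit returns there.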
Here is a toy example where the edgeR logFC agrees exactly with the naive formula.
In this example, I have set all the library sizes to be the same (equal to 1 million) and I have set prior.count=0 so as to turn off the logFC shrinkage:
> library(edgeR)
> group <- factor(c("A","A","B","B"))
> design <- model.matrix(~group)
> counts <- matrix(c(1,1,8,8),1,4)
> fit <- glmFit(counts,design,dispersion=0,prior.count=0,lib.size=rep(1e6,4))
> lrt <- glmLRT(fit)
> topTags(lrt)
Coefficient: groupB
  logFC   logCPM       LR       PValue          FDR
1     3 2.700434 12.39534 0.0004304059 0.0004304059
Here logFC = log2(8/1) = 3. In general, though, the library sizes are very unlikely to be identical for every sample, in which case the naive formula really is too naive and edgeR will give a different result.
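If you want a quick depth-aware version of the naive calculation, a rough option is to take the log2 ratio of the group mean CPMs instead of the raw mean counts. This is only a sketch (it assumes you have a counts matrix, a vector of library sizes and a group factor analogous to the ones above), and it is an approximation, not edgeR's GLM estimate:

> cpms <- cpm(counts, lib.size=lib.sizes)
> log2(rowMeans(cpms[, group=="B", drop=FALSE]) / rowMeans(cpms[, group=="A", drop=FALSE]))

Even this will not match glmFit exactly in general, because the GLM incorporates the library sizes as offsets in the fit and, by default, applies some shrinkage through prior.count.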