For a normal population, if X ~ N(μ, σ^2), the following equation converts X to a standard normal distribution:

z = (X − μ) / σ,  where z ~ N(0, 1).

When we run a hypothesis test on the mean of a sample of size n drawn from a normal population with known σ^2, we instead compute U (or z) as:

z = (X̄ − μ) / (σ / sqrt(n)).
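To make the difference concrete, here is a minimal sketch (the values mu, sigma, x, x_bar, and n are made up for illustration): standardizing a single observation divides by σ, while standardizing a sample mean divides by σ/sqrt(n).

```python
import math

# Hypothetical population: X ~ N(mu, sigma^2)
mu, sigma = 100.0, 15.0

# Case 1: standardize a single observation X.
# z = (X - mu) / sigma
x = 115.0
z_single = (x - mu) / sigma  # (115 - 100) / 15 = 1.0

# Case 2: standardize the mean X-bar of a sample of size n.
# z = (X-bar - mu) / (sigma / sqrt(n))
n = 25
x_bar = 103.0
z_mean = (x_bar - mu) / (sigma / math.sqrt(n))  # (103 - 100) / 3 = 1.0

print(z_single, z_mean)
```

Both cases are "divide by the standard deviation of the thing being standardized"; they differ only because a single X and a mean X̄ have different standard deviations.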
The question here is how to explain the difference in the denominators and why the latter is appropriate in this case. I noticed that some websites (such as statisticshowto) explain it as follows:
"You always divide by sqrt(n). However, occasionally the square root of n equals 1 (making it just σ in the denominator). For example, if you are choosing one person and trying to figure out the probability their weight is under x pounds, then n = 1. In other words, if you are calculating a z-score, you can always use sqrt(n)." (https://www.statisticshowto.com/sigma-sqrt-n-used/)
But this looks like just a handy mnemonic, doesn't it?
I would like to know the correct understanding of this issue, ideally one that is more general and rigorous.
Thank you for your attention.
zhang
Hi @Jean-Karim Heriche,
Thanks for your reply. My English is not good, so thank you also for understanding what I meant.
The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution. So, in fact, the standard deviation of X̄ is the standard error of the samples X(i). Is that right? In the latter expression we are actually standardizing the sample mean, so we need the standard error of the sample (σ/sqrt(n)) as the standard deviation of the mean X̄; in fact, σ(X̄) = σ/sqrt(n).
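The identity σ(X̄) = σ/sqrt(n) can be checked by simulation. This is a minimal sketch (mu, sigma, n, and the number of repetitions are arbitrary choices): draw many samples of size n, record each sample mean, and compare the empirical standard deviation of those means with σ/sqrt(n).

```python
import math
import random

random.seed(0)
mu, sigma, n, reps = 0.0, 2.0, 16, 20000

# Draw `reps` samples of size n from N(mu, sigma^2); keep each sample mean.
means = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

# Empirical standard deviation of the sample means ...
m = sum(means) / reps
sd_means = math.sqrt(sum((v - m) ** 2 for v in means) / (reps - 1))

# ... should be close to the theoretical value sigma / sqrt(n) = 0.5.
print(sd_means, sigma / math.sqrt(n))
```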
Yes, I think you've got it.
Thank you for your reply.