Statistical inference is the process of deducing properties of an underlying distribution by analysis of data.
One of the main goals of statistics is to estimate unknown parameters. To approximate these parameters, we choose an estimator, which is simply any function of randomly sampled observations. To illustrate this idea, we will estimate the value of \( \pi \) by uniformly dropping samples on a square containing an inscribed circle. We define the estimator \( \hat{\pi} \) below, where \( m \) is the number of samples within our circle and \( n \) is the total number of samples dropped. It can be shown that this estimator has the desirable property of being unbiased.
\(\hat{\pi} = 4\dfrac{m}{n}\)

Indeed, each sample lands inside the circle with probability equal to the ratio of the circle's area to the square's, namely \(\pi/4\), so \(E[m/n] = \pi/4\) and therefore \(E[\hat{\pi}] = \pi\).
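The simulation behind this estimator can be sketched in a few lines of Python. This is a minimal illustration, not the page's own implementation; the function name estimate_pi and the choice of the square \([-1,1]\times[-1,1]\) are assumptions made here for concreteness.

```python
import random

def estimate_pi(n):
    """Drop n uniform points on the square [-1, 1] x [-1, 1] and count
    how many land inside the inscribed unit circle."""
    m = 0  # points inside the circle
    for _ in range(n):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:  # inside (or on) the circle x^2 + y^2 = 1
            m += 1
    return 4 * m / n  # the estimator pi-hat = 4 m / n

print(estimate_pi(1_000_000))  # typically prints a value near 3.14
```

By the law of large numbers, the estimate concentrates around \(\pi\) as \(n\) grows.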
In contrast to point estimators, confidence intervals estimate a parameter by specifying a range of possible values. Such an interval is associated with a confidence level, which is the probability that the procedure used to generate the interval will produce an interval containing the true parameter.
Choose a probability distribution to sample from.
Choose a sample size \((n)\) and confidence level \((1-\alpha)\).
Start sampling to generate confidence intervals.
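The same experiment can be simulated outside the widget. The following is a minimal Python sketch assuming a normal population with true mean 0, a sample size of 30, and a z-based 95% interval; normal_ci and all numeric choices are illustrative, not taken from the page.

```python
import random
import statistics
from math import sqrt

def normal_ci(sample, z=1.96):
    """Approximate confidence interval for the mean, using the normal
    approximation; z = 1.96 corresponds to a 95% confidence level."""
    n = len(sample)
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / sqrt(n)  # estimated standard error
    return mean - z * se, mean + z * se

# Coverage check: the true mean (0 here) should fall inside roughly
# 95% of the intervals built from repeated samples.
true_mean, hits, trials = 0.0, 0, 1000
for _ in range(trials):
    sample = [random.gauss(true_mean, 1.0) for _ in range(30)]
    lo, hi = normal_ci(sample)
    hits += lo <= true_mean <= hi
print(hits / trials)  # close to 0.95
```

The printed coverage fraction is what the confidence level describes: a property of the interval-generating procedure, not of any single interval.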
Bootstrapping is a technique that relies on uniform sampling with replacement from a random sample. It can be used to approximate the sampling distribution of a statistic, and hence to estimate quantities such as standard errors and confidence intervals, without assuming a parametric form for the population.
Choose a probability distribution to sample from.
Choose a sample size \((n)\) and sample from your chosen distribution.
Resample to generate the empirical distribution.
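To make the resampling step concrete, here is a minimal Python sketch of the loop, assuming exponentially distributed data, a sample size of 50, and 10,000 resamples; bootstrap_means and these numbers are illustrative choices, not part of the original demo.

```python
import random
import statistics

def bootstrap_means(sample, resamples=10_000):
    """Resample uniformly with replacement to build the empirical
    (bootstrap) distribution of the sample mean."""
    n = len(sample)
    return [
        statistics.mean(random.choices(sample, k=n))  # one bootstrap replicate
        for _ in range(resamples)
    ]

# Example: a 95% percentile bootstrap interval for the mean of skewed data.
sample = [random.expovariate(1.0) for _ in range(50)]
boot = sorted(bootstrap_means(sample))
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(lo, hi)
```

Sorting the bootstrap replicates and reading off the 2.5th and 97.5th percentiles gives the percentile interval; other bootstrap intervals exist, but this is the simplest.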