Wednesday, 11 June 2014

Introduction to Statistics
Lecture 2
Outline
Statistical Inference
Distributions & Densities
Normal Distribution
Sampling Distribution & Central Limit Theorem
Hypothesis Tests
P-values
Confidence Intervals
Two-Sample Inferences
Paired Data

Books & Software
But first…
For Thursday:
-      record statistical tests reported in any papers you have been reading.
-      need any help understanding the tests? We can discuss Thursday.
Statistical Inference
Statistical Inference – the process of drawing conclusions about a population based on information in a sample
Unlikely to see this published…
        “In our study of a new antihypertensive drug we found an effective 10% reduction in blood pressure for those on the new therapy. However, the effects seen are only specific to the subjects in our study. We cannot say this drug will work for hypertensive people in general”.

Describing a population
Characteristics of a population, e.g. the population mean μ and the population standard deviation σ, are never known exactly
Sample characteristics, e.g. the sample mean x̄ and the sample standard deviation s, are estimates of the population characteristics μ and σ
A sample characteristic, e.g. x̄, is called a statistic; a population characteristic, e.g. μ, is called a parameter
Statistical Inference
Distributions
As sample size increases, histogram class widths can be narrowed such that the histogram eventually becomes a smooth curve
The population histogram of a random variable is referred to as the distribution of the random variable, i.e. it shows how the population is distributed across the number line
Density curve
A smooth curve representing a relative frequency distribution is called a density curve

The area under the density curve between any two points a and b is the proportion of values between a and b.
Sample Relative Frequency Distribution
Population Relative Frequency Distribution (Density)
Distribution Shapes
The Normal Distribution
The Normal distribution is considered to be the most important distribution in statistics

It occurs in “nature” from processes consisting of a very large number of elements acting in an additive manner

However, it would be very difficult to use this argument to assume normality of your data
Later, we will see exactly why the Normal is so important in statistics
Normal Distribution (con’t)
Closely related is the log-normal distribution, based on factors acting multiplicatively. This distribution is right-skewed.
Note: the logarithm of the data is thus Normal.

The log-transformation of data is very common, mostly to eliminate skew in data
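A quick simulation illustrates the point (a sketch using only Python's standard library; the log-normal parameters and sample size are arbitrary choices): the raw log-normal data are strongly right-skewed, while their logarithms are not.

```python
import random
from math import log

def skewness(xs):
    """Sample skewness: third central moment over sd cubed."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

random.seed(1)
data = [random.lognormvariate(0, 1) for _ in range(20000)]  # right-skewed
logs = [log(x) for x in data]                               # ~ Normal(0, 1)
print(skewness(data) > 1, abs(skewness(logs)) < 0.1)
```

Taking logs here removes the skew entirely, which is why the log-transformation is so common in practice.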
Properties of the Normal Distribution
The Normal distribution has a symmetric bell-shaped density curve
Characterised by two parameters, i.e. the mean μ and standard deviation σ
68% of data lie within 1σ of the mean μ
95% of data lie within 2σ of the mean μ
99.7% of data lie within 3σ of the mean μ

Normal curve
Standard Normal distribution
If X is a Normally distributed random variable with mean = μ and standard deviation = σ, then X can be converted to a Standard Normal random variable Z using:

        Z = (X − μ) / σ
Standard Normal distribution (contd.)
Z has mean = 0 and standard deviation = 1
Using this transformation, we can calculate areas under any normal distribution

Example
Assume the distribution of blood pressure is Normally distributed with μ = 80 mm and σ = 10 mm
What percentage of people have blood pressure greater than 90?
Z score transformation:
                Z=(90 - 80) /10 = 1
           
Example (contd.)
The percentage greater than 90 is equivalent to the area under the Standard Normal curve above Z = 1.
From tables of the Standard Normal distribution, the area to the right of Z=1 is 0.1587 (or 15.87%)
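Instead of tables, the area can be computed from the standard normal CDF (a sketch using only Python's standard library):

```python
from math import erf, sqrt

def normal_area_above(x, mu, sigma):
    """P(X > x) for X ~ Normal(mu, sigma), via the Z-score transformation."""
    z = (x - mu) / sigma                    # Z = (X - mu) / sigma
    cdf = 0.5 * (1 + erf(z / sqrt(2)))      # P(Z <= z), standard normal CDF
    return 1 - cdf

p = normal_area_above(90, mu=80, sigma=10)  # Z = 1
print(round(p, 4))                          # 0.1587
```

This reproduces the tabulated value: 15.87% of people have blood pressure above 90.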
How close is Sample Statistic to Population Parameter ?
Population parameters, e.g. μ and σ, are fixed
Sample statistics, e.g. x̄, vary from sample to sample
How close is x̄ to μ?
Cannot answer the question for a particular sample
Can answer if we can find out about the distribution that describes the variability in the random variable x̄
Central Limit Theorem (CLT)
Suppose you take any random sample from a population with mean μ and variance σ2

Then, for large sample sizes, the CLT states that the distribution of sample means is the Normal distribution, with mean μ and variance σ²/n (i.e. the standard deviation is σ/√n)

If the original data is Normal then the sample means are Normal, irrespective of sample size

What is it really saying?
(1) It gives a relationship between the sample mean and population mean
This gives us a framework to extrapolate our sample results to the population (statistical inference);
(2) It doesn’t matter what the distribution of the original data is, the sample mean will always be Normally distributed when n is large.
This is why the Normal is so central to statistics
Example: Toss 1, 2 or 10 dice (10,000 times)
[Histograms: the raw data for 1 die vs. the sample means for 2 and 10 dice]
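The dice experiment is easy to reproduce (a sketch; the seed and repetition counts are arbitrary choices). For 10 dice, the CLT says the sample means should centre on μ = 3.5 with standard deviation σ/√10 ≈ 0.54, where σ = √(35/12) ≈ 1.71 is the standard deviation of one die.

```python
import random
from statistics import mean, stdev

random.seed(42)

def sample_mean(n_dice):
    """Mean of one toss of n_dice fair dice."""
    return mean(random.randint(1, 6) for _ in range(n_dice))

means = [sample_mean(10) for _ in range(10000)]   # 10 dice, 10,000 tosses
mu, sigma = 3.5, (35 / 12) ** 0.5                 # mean and sd of one die
print(round(mean(means), 2))                      # close to mu = 3.5
print(round(stdev(means), 2))                     # close to sigma / sqrt(10) = 0.54
```

Repeating with 1 or 2 dice shows the distribution of means getting narrower and more bell-shaped as n grows.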
CLT cont’d
(3) It describes the distribution of the sample mean
The values of x̄ obtained from repeatedly taking samples of size n describe a separate population
The distribution of any statistic is often called the sampling distribution
Sampling distribution of x̄
CLT continued
(4) The mean of the sampling distribution of x̄ is equal to the population mean, i.e.

        μ_x̄ = μ

(5) The standard deviation of the sampling distribution of x̄ is the population standard deviation divided by the square root of the sample size, i.e.

        σ_x̄ = σ / √n
Estimates
Since s is an estimate of σ, an estimate of σ/√n is

        s / √n

This is known as the standard error of the mean

Be careful not to confuse the standard deviation and the standard error!
The standard deviation describes the variability of the data
The standard error is a measure of the precision of x̄ as an estimate of μ
Sampling distribution of x̄ for a Normal population
Sampling dist. of x̄ for a non-Normal population
Confidence Interval
A confidence interval for a population characteristic is an interval of plausible values for the characteristic. It is constructed so that, with a chosen degree of confidence (the confidence level), the value of the characteristic will be captured inside the interval
E.g. we claim with 95% confidence that the population mean lies between 15.6 and 17.2
Methods for Statistical Inference


Confidence Intervals

Hypothesis Tests

Confidence Interval for μ when σ is known
A 95% confidence interval for μ if σ is known is given by:

        x̄ ± 1.96 σ/√n
Sampling distribution of x̄
Rationale for Confidence Interval
From the sampling distribution of x̄, conclude that μ and x̄ are within 1.96 standard errors (σ/√n) of each other 95% of the time
Otherwise stated, 95% of such intervals contain μ
So, the interval x̄ ± 1.96 σ/√n can be taken as an interval that typically would include μ
Example
A random sample of 80 tablets had an average potency of 15 mg. Assume σ is known to be 4 mg.
x̄ = 15, σ = 4, n = 80
A 95% confidence interval for μ is

        15 ± 1.96 × 4/√80
                = (14.12, 15.88)
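The same calculation in code (a small sketch of the slide's example):

```python
from math import sqrt

xbar, sigma, n = 15, 4, 80           # sample mean, known sigma, sample size
se = sigma / sqrt(n)                 # standard error of the mean
lo, hi = xbar - 1.96 * se, xbar + 1.96 * se
print(round(lo, 2), round(hi, 2))    # 14.12 15.88
```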


Confidence Interval for μ when σ is unknown
Nearly always σ is unknown and is estimated using the sample standard deviation s
The value 1.96 in the confidence interval is replaced by a new quantity, i.e., t0.025
The 95% confidence interval when σ is unknown is:

        x̄ ± t0.025 s/√n
Student’s t Distribution
Closely related to the standard normal distribution Z
Symmetric and bell-shaped
Has mean = 0 but has a larger standard deviation
Exact shape depends on a parameter called degrees of freedom (df) which is related to sample size
In this context df = n-1

Student’s t distribution for 3, 10 df and standard Normal distribution
Definition of t0.025 values
Example
26 measurements of the potency of a single batch of tablets in mg per tablet are as follows
Example (contd.)
x̄ = 490.1 mg per tablet
t0.025 with df = 25 is 2.06

        x̄ ± t0.025 × s/√26 = (485.74, 494.45)

So, the batch potency lies between 485.74 and 494.45 mg per tablet
General Form of Confidence Interval



Estimate ± (critical value from distribution) × (standard error)
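This general form can be captured in one small helper (a sketch; demonstrated with the known-σ tablet example from earlier):

```python
from math import sqrt

def confidence_interval(estimate, critical_value, standard_error):
    """General form: estimate +/- (critical value) * (standard error)."""
    margin = critical_value * standard_error
    return (estimate - margin, estimate + margin)

# Known sigma: z critical value 1.96, standard error 4 / sqrt(80)
lo, hi = confidence_interval(15, 1.96, 4 / sqrt(80))
print(round(lo, 2), round(hi, 2))   # 14.12 15.88, matching the earlier example
# Unknown sigma: substitute t0.025 (2.06 for df = 25) and the estimated
# standard error s / sqrt(n) in place of 1.96 and sigma / sqrt(n).
```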
Hypothesis testing
Used to investigate the validity of a claim about the value of a population characteristic
For example, the mean potency of a batch of tablets is 500 mg per tablet, i.e.,
        μ0 = 500 mg
Procedure
Specify Null and Alternative hypotheses
Specify test statistic
Define what constitutes an exceptional outcome
Calculate test statistic and determine whether or not to reject the Null Hypothesis

Step 1
Specify the hypothesis to be tested and the alternative that will be decided upon if this is rejected
The hypothesis to be tested is referred to as the Null Hypothesis (labelled H0)
The alternative hypothesis is labelled H1
For the earlier example this gives:

        H0: μ = 500        H1: μ ≠ 500
Step 1 (continued)
The Null Hypothesis is assumed to be true unless the data clearly demonstrate otherwise
Step 2
Specify a test statistic which will be used to measure departure from μ0,
        where μ0 is the value specified under the Null Hypothesis, e.g. μ0 = 500 in the earlier example.
For hypothesis tests on sample means the test statistic is:

        t = (x̄ − μ0) / (s/√n)
Step 2 (contd.)
The test statistic

        t = (x̄ − μ0) / (s/√n)

        is a ‘signal to noise ratio’, i.e. it measures how far x̄ is from μ0 in terms of standard error units
The t distribution with df = n−1 describes the distribution of the test statistic if the Null Hypothesis is true
In the earlier example, the test statistic t has a t distribution with df = 25
Step 3
Define what will be an exceptional outcome
a value of the test statistic is exceptional if it has only a small chance of occurring when the null hypothesis is true
The probability chosen to define an exceptional outcome is called the significance level of the test and is labelled α
Conventionally, α is chosen to be 0.05
Step 3 (contd.)
α = 0.05 gives cut-off values on the sampling distribution of t called critical values
values of the test statistic t lying beyond the critical values lead to rejection of the null hypothesis
For the earlier example the critical values for a t distribution with df = 25 are ±2.06
t distribution with df=25 showing critical region
Step 4
Calculate the test statistic and see if it lies in the critical region
For the example

        t = (x̄ − 500) / (s/√26) = −4.683

t = −4.683 is < −2.06, so the hypothesis that the batch potency is 500 mg/tablet is rejected
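The four steps can be put together as code (a sketch: the slides do not show the raw data, so the summary statistics `xbar = 490.1` and `s = 10.8` below are hypothetical stand-ins chosen to roughly reproduce the reported t ≈ −4.683):

```python
from math import sqrt

# Hypothetical summary statistics -- the raw data are not shown on the
# slides; these values roughly reproduce the reported t of -4.683.
xbar, s, n = 490.1, 10.8, 26
mu0 = 500                              # value under the Null Hypothesis
se = s / sqrt(n)                       # standard error of the mean
t = (xbar - mu0) / se                  # 'signal to noise ratio'
critical = 2.06                        # t_0.025 with df = n - 1 = 25
reject = abs(t) > critical
print(round(t, 2), reject)             # t is about -4.67: reject H0
```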

P value
Example (contd)
P value = probability of observing a more extreme value of t
The observed t value was -4.683, so the P value is the probability of getting a value more extreme than ± 4.683
This P value is calculated as the area under the t distribution below -4.683 plus the area above 4.683, i.e., 0.00008474 !
Example (contd)
Less than 1 in 10,000 chance of observing a value of t more extreme than -4.683 if the Null Hypothesis is true 
Evidence in favour of the alternative hypothesis is very strong
P value (contd.)
Two-tail and One-tail tests
The test described in the previous example is a two-tail test
The null hypothesis is rejected if either an unusually large or unusually small value of the test statistic is obtained, i.e. the rejection region is divided between the two tails
One-tail tests
Reject the null hypothesis only if the observed value of the test statistic is
Too large
Too small
In both cases the critical region is entirely in one tail so the tests are one-tail tests
Statistical versus Practical Significance
When we reject a null hypothesis it is usual to say the result is statistically significant at the chosen level of significance
But we should also always consider the practical significance of the magnitude of the difference between the estimate (of the population characteristic) and the value the null hypothesis states it to be
After the break

