The binomial distribution is a discrete distribution expressing the probability of a set of dichotomous
alternatives, i.e., success or failure.
Bernoulli Distribution:
A random variable X which takes the two values 0 and 1 with probabilities q and p, i.e., P(X = 1) = p and
P(X = 0) = q, where q = 1 - p, is called a Bernoulli variate and is said to follow a Bernoulli distribution;
here p and q are the probabilities of success and failure respectively. It was discovered by the Swiss
mathematician James Bernoulli (1654-1705).
The binomial distribution is a discrete distribution in which the random variable X (the number of
successes) assumes the values 0, 1, 2, ..., n, where n is finite.
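The binomial probability P(X = x) = nCx p^x q^(n-x) can be computed directly. The sketch below is illustrative; the function name `binomial_pmf` and the example values are chosen for this note, not taken from the text.

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) = nCx * p^x * q^(n-x), with q = 1 - p."""
    q = 1 - p
    return comb(n, x) * p**x * q**(n - x)

# Example: probability of exactly 3 successes in 5 trials with p = 0.5
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```

Summing the pmf over x = 0, 1, ..., n gives 1, since (q + p)^n = 1 when q = 1 - p.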
POISSON DISTRIBUTION:
The Poisson distribution was discovered by the French mathematician and physicist
Simeon Denis Poisson in 1837. The Poisson distribution is also a discrete distribution. He derived
it as a limiting case of the binomial distribution. For n trials the binomial distribution is (q + p)^n;
the probability of x successes is given by P(X = x) = nCx p^x q^(n-x). If the number of trials n is
very large and the probability of success p is very small, so that the product np = m remains non-negative
and finite, then the binomial distribution tends to the Poisson distribution with
P(X = x) = e^(-m) m^x / x!, x = 0, 1, 2, ...
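The limiting behaviour can be checked numerically: holding np = m fixed while n grows, the binomial probabilities approach the Poisson probabilities. A minimal sketch (function names are illustrative, not from the text):

```python
from math import comb, exp, factorial

def binomial_pmf(x, n, p):
    """Binomial probability: nCx * p^x * (1-p)^(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, m):
    """Poisson probability: e^(-m) * m^x / x!"""
    return exp(-m) * m**x / factorial(x)

# Keep np = m = 2 fixed and let n grow: the two pmfs converge.
m = 2.0
for n in (10, 100, 1000):
    print(n, binomial_pmf(3, n, m / n), poisson_pmf(3, m))
```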
Null Hypothesis:
A hypothesis of no difference is called a null hypothesis and is usually denoted by H0.
According to Prof. R.A. Fisher, the null hypothesis is the hypothesis which is tested for possible
rejection under the assumption that it is true. It is a very useful tool in tests of significance.
For example, suppose we want to find out whether special classes (for Hr. Sec. students) after
school hours have benefited the students or not. We set up the null hypothesis H0: the
special classes after school hours have not benefited the students.
Alternative Hypothesis:
Any hypothesis which is complementary to the null hypothesis is called an
alternative hypothesis, usually denoted by H1. For example, if we want to test the null
hypothesis that the population has a specified mean mu0 (say), i.e., H0: mu = mu0, then the
alternative hypothesis could be H1: mu != mu0, H1: mu > mu0, or H1: mu < mu0.
Level of significance:
In testing a given hypothesis, the maximum probability with which we would be
willing to take the risk of rejecting a true null hypothesis is called the level of significance of
the test. This probability, often denoted by alpha, is generally specified before samples are drawn.
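The role of the level of significance can be illustrated with a simple two-tailed test of a population mean. All numbers here (sample mean 52, hypothesised mean 50, sigma 5, n 100) are hypothetical, chosen only to show how a test statistic is compared against alpha:

```python
from statistics import NormalDist

# Hypothetical example: test H0: mu = 50 against H1: mu != 50,
# given a sample mean of 52, sigma = 5, n = 100, at alpha = 0.05.
alpha = 0.05
z = (52 - 50) / (5 / 100**0.5)                 # standardised test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed p-value
print(z, p_value, "reject H0" if p_value < alpha else "do not reject H0")
```

Since the p-value is below alpha, the null hypothesis is rejected at the 5% level of significance.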
6.4.1 Definition:
The chi-square (chi^2) test (chi is pronounced "ki") is one of the simplest and most
widely used non-parametric tests in statistical work. The chi^2 test was first used by Karl Pearson
in the year 1900. The quantity chi^2 describes the magnitude of the discrepancy between theory
and observation. It is defined as
chi^2 = sum of (O - E)^2 / E over all classes,
where O refers to the observed frequencies and E refers to the expected frequencies.
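The statistic is straightforward to compute from paired observed and expected frequencies. A minimal sketch; the die-fairness counts below are made up for illustration:

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all classes."""
    return sum((o - e)**2 / e for o, e in zip(observed, expected))

# Illustrative example: 60 rolls of a die, expected frequency 10 per face.
O = [8, 12, 9, 11, 10, 10]
E = [10] * 6
print(chi_square(O, E))  # 1.0
```

The computed value is then compared with the tabulated chi-square value for the appropriate degrees of freedom at the chosen level of significance.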
ANOVA
Definition:
According to R.A. Fisher, Analysis of Variance (ANOVA) is the "separation of
variance ascribable to one group of causes from the variance ascribable to other groups." By
this technique the total variation in the sample data is expressed as the sum of its non-negative
components, where each of these components is a measure of the variation due to some
specific independent source, factor, or cause.
Assumptions:
For the validity of the F-test in ANOVA, the following assumptions are made:
(i) the observations are independent,
(ii) the parent population from which the observations are taken is normal, and
(iii) the various treatment and environmental effects are additive in nature.
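The decomposition of total variation into components can be sketched for one-way ANOVA: the total sum of squares splits into a between-group and a within-group part, and the F ratio compares their mean squares. The function name and the three small groups below are illustrative only:

```python
def one_way_anova_f(groups):
    """Split total variation into between-group and within-group sums of
    squares and return the F ratio (between mean square / within mean square)."""
    k = len(groups)                              # number of treatments
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean)**2 for g in groups)
    ss_within = sum((x - sum(g) / len(g))**2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

groups = [[5, 6, 7], [8, 9, 10], [11, 12, 13]]
print(one_way_anova_f(groups))  # 27.0
```

The resulting F value is compared with the tabulated F for (k - 1, n - k) degrees of freedom at the chosen level of significance.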