
ELEMENTARY STATISTICS, 5/E

Neil A. Weiss

FORMULAS

NOTATION The following notation is used on this card:

  $n$ = sample size                     $\sigma$ = population standard deviation
  $\bar{x}$ = sample mean               $d$ = paired difference
  $s$ = sample standard deviation       $\hat{p}$ = sample proportion
  $Q_j$ = $j$th quartile                $p$ = population proportion
  $N$ = population size                 $O$ = observed frequency
  $\mu$ = population mean               $E$ = expected frequency
CHAPTER 3 Descriptive Measures

• Sample mean: $\bar{x} = \frac{\sum x}{n}$

• Range: Range = Max − Min

• Sample standard deviation:
  $s = \sqrt{\frac{\sum (x - \bar{x})^2}{n - 1}}$  or  $s = \sqrt{\frac{\sum x^2 - (\sum x)^2 / n}{n - 1}}$

• Interquartile range: IQR = $Q_3 - Q_1$

• Lower limit = $Q_1 - 1.5 \cdot \mathrm{IQR}$,  Upper limit = $Q_3 + 1.5 \cdot \mathrm{IQR}$

• Population mean (mean of a variable): $\mu = \frac{\sum x}{N}$

• Population standard deviation (standard deviation of a variable):
  $\sigma = \sqrt{\frac{\sum (x - \mu)^2}{N}}$  or  $\sigma = \sqrt{\frac{\sum x^2}{N} - \mu^2}$

• Standardized variable: $z = \frac{x - \mu}{\sigma}$
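
The descriptive measures above translate directly into code. The following Python sketch is only an illustration: the data values are invented, and the quartile convention used by the standard library may differ slightly from the one in the text.

```python
import statistics

data = [12, 15, 11, 19, 14, 16, 21, 13, 18, 15]   # illustrative sample data
n = len(data)

xbar = sum(data) / n                                        # sample mean
s = (sum((x - xbar) ** 2 for x in data) / (n - 1)) ** 0.5   # sample standard deviation

q1, q2, q3 = statistics.quantiles(data, n=4)   # quartiles (convention may differ from the text)
iqr = q3 - q1                                  # interquartile range
lower_limit = q1 - 1.5 * iqr                   # lower outlier limit
upper_limit = q3 + 1.5 * iqr                   # upper outlier limit

print(xbar, s, iqr, lower_limit, upper_limit)
```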

CHAPTER 4 Descriptive Methods in Regression and Correlation

• $S_{xx}$, $S_{xy}$, and $S_{yy}$:
  $S_{xx} = \sum (x - \bar{x})^2 = \sum x^2 - (\sum x)^2 / n$
  $S_{xy} = \sum (x - \bar{x})(y - \bar{y}) = \sum xy - (\sum x)(\sum y) / n$
  $S_{yy} = \sum (y - \bar{y})^2 = \sum y^2 - (\sum y)^2 / n$

• Regression equation: $\hat{y} = b_0 + b_1 x$, where
  $b_1 = \frac{S_{xy}}{S_{xx}}$  and  $b_0 = \frac{1}{n}\left(\sum y - b_1 \sum x\right) = \bar{y} - b_1 \bar{x}$

• Total sum of squares: $\text{SST} = \sum (y - \bar{y})^2 = S_{yy}$

• Regression sum of squares: $\text{SSR} = \sum (\hat{y} - \bar{y})^2 = S_{xy}^2 / S_{xx}$

• Error sum of squares: $\text{SSE} = \sum (y - \hat{y})^2 = S_{yy} - S_{xy}^2 / S_{xx}$

• Regression identity: SST = SSR + SSE

• Coefficient of determination: $r^2 = \frac{\text{SSR}}{\text{SST}}$

• Linear correlation coefficient:
  $r = \frac{\frac{1}{n-1}\sum (x - \bar{x})(y - \bar{y})}{s_x s_y}$  or  $r = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}$
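
As a quick numerical check of these regression formulas, the sketch below (the x and y values are made up for illustration) computes $S_{xx}$, $S_{xy}$, $S_{yy}$, the slope and intercept, and $r$ from the computing formulas above.

```python
x = [1, 2, 3, 4, 5, 6]                  # illustrative predictor values
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]     # illustrative response values
n = len(x)

sum_x, sum_y = sum(x), sum(y)
Sxx = sum(xi**2 for xi in x) - sum_x**2 / n
Syy = sum(yi**2 for yi in y) - sum_y**2 / n
Sxy = sum(xi * yi for xi, yi in zip(x, y)) - sum_x * sum_y / n

b1 = Sxy / Sxx                          # slope of the regression line
b0 = (sum_y - b1 * sum_x) / n           # intercept

SST = Syy
SSR = Sxy**2 / Sxx
SSE = SST - SSR                         # regression identity: SST = SSR + SSE
r = Sxy / (Sxx * Syy) ** 0.5            # linear correlation coefficient
r_squared = SSR / SST                   # coefficient of determination (equals r**2)

print(b0, b1, r, r_squared)
```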
CHAPTER 5 Probability and Random Variables

• Probability for equally likely outcomes:
  $P(E) = \frac{f}{N}$,
  where $f$ denotes the number of ways event $E$ can occur and $N$ denotes the total number of outcomes possible.

• Special addition rule:
  $P(A \text{ or } B \text{ or } C \text{ or } \cdots) = P(A) + P(B) + P(C) + \cdots$
  ($A$, $B$, $C$, … mutually exclusive)

• Complementation rule: $P(E) = 1 - P(\text{not } E)$

• General addition rule: $P(A \text{ or } B) = P(A) + P(B) - P(A \,\&\, B)$

• Mean of a discrete random variable $X$: $\mu = \sum x P(X = x)$

• Standard deviation of a discrete random variable $X$:
  $\sigma = \sqrt{\sum (x - \mu)^2 P(X = x)}$  or  $\sigma = \sqrt{\sum x^2 P(X = x) - \mu^2}$

• Factorial: $k! = k(k - 1) \cdots 2 \cdot 1$

• Binomial coefficient: $\binom{n}{x} = \frac{n!}{x!\,(n - x)!}$

• Binomial probability formula:
  $P(X = x) = \binom{n}{x} p^x (1 - p)^{n - x}$,
  where $n$ denotes the number of trials and $p$ denotes the success probability.

• Mean of a binomial random variable: $\mu = np$

• Standard deviation of a binomial random variable: $\sigma = \sqrt{np(1 - p)}$
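
The binomial formulas carry over to code almost verbatim. This Python sketch is illustrative only; the values $n = 10$ and $p = 0.3$ are arbitrary examples.

```python
from math import comb, sqrt

n, p = 10, 0.3                       # illustrative number of trials and success probability

def binomial_pmf(x, n, p):
    """Binomial probability formula: P(X = x) = C(n, x) * p**x * (1 - p)**(n - x)."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

mu = n * p                           # mean of a binomial random variable
sigma = sqrt(n * p * (1 - p))        # standard deviation of a binomial random variable

print(binomial_pmf(3, n, p), mu, sigma)
print(sum(binomial_pmf(x, n, p) for x in range(n + 1)))   # probabilities sum to 1
```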
CHAPTER 7 The Sampling Distribution of the Sample Mean

• Mean of the variable $\bar{x}$: $\mu_{\bar{x}} = \mu$

• Standard deviation of the variable $\bar{x}$: $\sigma_{\bar{x}} = \sigma / \sqrt{n}$

CHAPTER 8 Confidence Intervals for One Population Mean

• Standardized version of the variable $\bar{x}$:
  $z = \frac{\bar{x} - \mu}{\sigma / \sqrt{n}}$

• z-interval for $\mu$ ($\sigma$ known, normal population or large sample):
  $\bar{x} \pm z_{\alpha/2} \cdot \frac{\sigma}{\sqrt{n}}$

• Margin of error for the estimate of $\mu$: $E = z_{\alpha/2} \cdot \frac{\sigma}{\sqrt{n}}$

• Sample size for estimating $\mu$:
  $n = \left(\frac{z_{\alpha/2} \cdot \sigma}{E}\right)^2$,
  rounded up to the nearest whole number.

• Studentized version of the variable $\bar{x}$:
  $t = \frac{\bar{x} - \mu}{s / \sqrt{n}}$

• t-interval for $\mu$ ($\sigma$ unknown, normal population or large sample):
  $\bar{x} \pm t_{\alpha/2} \cdot \frac{s}{\sqrt{n}}$
  with df = $n - 1$.
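
For a numerical illustration of the z- and t-intervals, the sketch below uses scipy.stats for the critical values $z_{\alpha/2}$ and $t_{\alpha/2}$; the sample data and the assumed known $\sigma$ are invented, and SciPy itself is an assumption, not something the card relies on.

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import norm, t      # assumes SciPy is available for critical values

data = [98.6, 101.2, 99.8, 100.4, 97.9, 102.1, 100.0, 99.3]   # illustrative sample
n, xbar, s = len(data), mean(data), stdev(data)
alpha = 0.05                          # 95% confidence level
sigma = 1.5                           # assumed known population std. dev. (z-interval only)

# z-interval for mu:  xbar +/- z_{alpha/2} * sigma / sqrt(n)
z_crit = norm.ppf(1 - alpha / 2)
z_interval = (xbar - z_crit * sigma / sqrt(n), xbar + z_crit * sigma / sqrt(n))

# t-interval for mu (sigma unknown):  xbar +/- t_{alpha/2} * s / sqrt(n), df = n - 1
t_crit = t.ppf(1 - alpha / 2, df=n - 1)
t_interval = (xbar - t_crit * s / sqrt(n), xbar + t_crit * s / sqrt(n))

print(z_interval, t_interval)
```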
CHAPTER 9 Hypothesis Tests for One Population Mean

• z-test statistic for $H_0$: $\mu = \mu_0$ ($\sigma$ known, normal population or large sample):
  $z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}$

• t-test statistic for $H_0$: $\mu = \mu_0$ ($\sigma$ unknown, normal population or large sample):
  $t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}$
  with df = $n - 1$.
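
A minimal sketch of the one-sample t-test statistic and its two-tailed P-value, again leaning on scipy.stats (assumed available) for the t distribution; the data and $\mu_0$ are illustrative.

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t             # assumes SciPy is available

data = [9.8, 10.4, 10.1, 9.6, 10.7, 10.2, 9.9]   # illustrative sample
mu0 = 10.0                             # hypothesized population mean

n, xbar, s = len(data), mean(data), stdev(data)
t_stat = (xbar - mu0) / (s / sqrt(n))             # t-test statistic, df = n - 1
p_value = 2 * t.sf(abs(t_stat), df=n - 1)         # two-tailed P-value

print(t_stat, p_value)
```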
CHAPTER 10 Inferences for Two Population Means

• Pooled sample standard deviation:
  $s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$

• Pooled t-test statistic for $H_0$: $\mu_1 = \mu_2$ (independent samples, normal populations or large samples, and equal population standard deviations):
  $t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{(1/n_1) + (1/n_2)}}$
  with df = $n_1 + n_2 - 2$.

• Pooled t-interval for $\mu_1 - \mu_2$ (independent samples, normal populations or large samples, and equal population standard deviations):
  $(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \cdot s_p \sqrt{(1/n_1) + (1/n_2)}$
  with df = $n_1 + n_2 - 2$.

• Degrees of freedom for nonpooled-t procedures:
  $\Delta = \frac{\left(s_1^2/n_1 + s_2^2/n_2\right)^2}{\frac{\left(s_1^2/n_1\right)^2}{n_1 - 1} + \frac{\left(s_2^2/n_2\right)^2}{n_2 - 1}}$,
  rounded down to the nearest integer.

• Nonpooled t-test statistic for $H_0$: $\mu_1 = \mu_2$ (independent samples, and normal populations or large samples):
  $t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{(s_1^2/n_1) + (s_2^2/n_2)}}$
  with df = $\Delta$.

• Nonpooled t-interval for $\mu_1 - \mu_2$ (independent samples, and normal populations or large samples):
  $(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \cdot \sqrt{(s_1^2/n_1) + (s_2^2/n_2)}$
  with df = $\Delta$.

• Paired t-test statistic for $H_0$: $\mu_1 = \mu_2$ (paired sample, and normal differences or large sample):
  $t = \frac{\bar{d}}{s_d / \sqrt{n}}$
  with df = $n - 1$.

• Paired t-interval for $\mu_1 - \mu_2$ (paired sample, and normal differences or large sample):
  $\bar{d} \pm t_{\alpha/2} \cdot \frac{s_d}{\sqrt{n}}$
  with df = $n - 1$.
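
The sketch below (two made-up independent samples) computes both the pooled and the nonpooled t statistics and the nonpooled degrees of freedom $\Delta$ from the formulas above.

```python
from math import sqrt, floor
from statistics import mean, variance

x1 = [23.1, 25.4, 24.8, 26.0, 22.9, 25.1]          # illustrative sample from population 1
x2 = [21.7, 22.9, 23.5, 22.1, 24.0, 21.9, 23.2]    # illustrative sample from population 2

n1, n2 = len(x1), len(x2)
m1, m2 = mean(x1), mean(x2)
v1, v2 = variance(x1), variance(x2)                # sample variances s1^2, s2^2

# Pooled t-test statistic, df = n1 + n2 - 2
sp = sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
t_pooled = (m1 - m2) / (sp * sqrt(1 / n1 + 1 / n2))

# Nonpooled t-test statistic with degrees of freedom Delta (rounded down)
t_nonpooled = (m1 - m2) / sqrt(v1 / n1 + v2 / n2)
delta = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
df_nonpooled = floor(delta)

print(t_pooled, t_nonpooled, df_nonpooled)
```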
CHAPTER 11 Inferences for Population Proportions

• Sample proportion:
  $\hat{p} = \frac{x}{n}$,
  where $x$ denotes the number of members in the sample that have the specified attribute.

• One-sample z-interval for $p$:
  $\hat{p} \pm z_{\alpha/2} \cdot \sqrt{\hat{p}(1 - \hat{p})/n}$
  (Assumption: both $x$ and $n - x$ are 5 or greater)

• Margin of error for the estimate of $p$:
  $E = z_{\alpha/2} \cdot \sqrt{\hat{p}(1 - \hat{p})/n}$

• Sample size for estimating $p$:
  $n = 0.25\left(\frac{z_{\alpha/2}}{E}\right)^2$  or  $n = \hat{p}_g(1 - \hat{p}_g)\left(\frac{z_{\alpha/2}}{E}\right)^2$,
  rounded up to the nearest whole number ($g$ = "educated guess")

• One-sample z-test statistic for $H_0$: $p = p_0$:
  $z = \frac{\hat{p} - p_0}{\sqrt{p_0(1 - p_0)/n}}$
  (Assumption: both $np_0$ and $n(1 - p_0)$ are 5 or greater)

• Pooled sample proportion: $\hat{p}_p = \frac{x_1 + x_2}{n_1 + n_2}$

• Two-sample z-test statistic for $H_0$: $p_1 = p_2$:
  $z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}_p(1 - \hat{p}_p)}\,\sqrt{(1/n_1) + (1/n_2)}}$
  (Assumptions: independent samples; $x_1$, $n_1 - x_1$, $x_2$, $n_2 - x_2$ are all 5 or greater)

• Two-sample z-interval for $p_1 - p_2$:
  $(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2} \cdot \sqrt{\hat{p}_1(1 - \hat{p}_1)/n_1 + \hat{p}_2(1 - \hat{p}_2)/n_2}$
  (Assumptions: independent samples; $x_1$, $n_1 - x_1$, $x_2$, $n_2 - x_2$ are all 5 or greater)

• Margin of error for the estimate of $p_1 - p_2$:
  $E = z_{\alpha/2} \cdot \sqrt{\hat{p}_1(1 - \hat{p}_1)/n_1 + \hat{p}_2(1 - \hat{p}_2)/n_2}$

• Sample size for estimating $p_1 - p_2$:
  $n_1 = n_2 = 0.5\left(\frac{z_{\alpha/2}}{E}\right)^2$  or  $n_1 = n_2 = \left(\hat{p}_{1g}(1 - \hat{p}_{1g}) + \hat{p}_{2g}(1 - \hat{p}_{2g})\right)\left(\frac{z_{\alpha/2}}{E}\right)^2$,
  rounded up to the nearest whole number ($g$ = "educated guess")
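
As an illustration of the one-proportion formulas, the sketch below computes the one-sample z-interval and the sample size needed for a target margin of error; the counts ($x = 56$ of $n = 200$) and the target $E = 0.03$ are invented, and scipy.stats is assumed available for $z_{\alpha/2}$.

```python
from math import sqrt, ceil
from scipy.stats import norm          # assumes SciPy is available for z_{alpha/2}

x, n = 56, 200                         # illustrative: 56 of 200 sampled members have the attribute
alpha = 0.05
z = norm.ppf(1 - alpha / 2)

p_hat = x / n                          # sample proportion
half_width = z * sqrt(p_hat * (1 - p_hat) / n)
z_interval = (p_hat - half_width, p_hat + half_width)    # one-sample z-interval for p

# Sample size for estimating p to within E = 0.03, with and without an educated guess
E = 0.03
n_conservative = ceil(0.25 * (z / E) ** 2)
n_guess = ceil(p_hat * (1 - p_hat) * (z / E) ** 2)        # using p_hat as the "educated guess"

print(z_interval, n_conservative, n_guess)
```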
CHAPTER 12 Chi-Square Procedures

• Expected frequencies for a chi-square goodness-of-fit test: $E = np$

• Test statistic for a chi-square goodness-of-fit test:
  $\chi^2 = \sum (O - E)^2 / E$
  with df = $k - 1$, where $k$ is the number of possible values for the variable under consideration.

• Expected frequencies for a chi-square independence test:
  $E = \frac{R \cdot C}{n}$,
  where $R$ = row total and $C$ = column total.

• Test statistic for a chi-square independence test:
  $\chi^2 = \sum (O - E)^2 / E$
  with df = $(r - 1)(c - 1)$, where $r$ and $c$ are the number of possible values for the two variables under consideration.
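
A short sketch of the goodness-of-fit statistic: the observed counts and hypothesized proportions below are invented, and scipy.stats (an assumption, not part of the card) supplies the chi-square P-value.

```python
from scipy.stats import chi2           # assumes SciPy is available

observed = [48, 35, 15, 2]               # illustrative observed frequencies O
proportions = [0.50, 0.30, 0.15, 0.05]   # hypothesized proportions under H0
n = sum(observed)

expected = [n * p for p in proportions]                # expected frequencies E = n*p
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                                 # df = k - 1
p_value = chi2.sf(chi_sq, df)

print(chi_sq, df, p_value)
```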

CHAPTER 13 Analysis of Variance (ANOVA)

• Notation in one-way ANOVA:
  $k$ = number of populations
  $n$ = total number of observations
  $\bar{x}$ = mean of all $n$ observations
  $n_j$ = size of sample from Population $j$
  $\bar{x}_j$ = mean of sample from Population $j$
  $s_j^2$ = variance of sample from Population $j$
  $T_j$ = sum of sample data from Population $j$

• Defining formulas for sums of squares in one-way ANOVA:
  $\text{SST} = \sum (x - \bar{x})^2$
  $\text{SSTR} = \sum n_j (\bar{x}_j - \bar{x})^2$
  $\text{SSE} = \sum (n_j - 1) s_j^2$

• One-way ANOVA identity: SST = SSTR + SSE

• Computing formulas for sums of squares in one-way ANOVA:
  $\text{SST} = \sum x^2 - (\sum x)^2 / n$
  $\text{SSTR} = \sum (T_j^2 / n_j) - (\sum x)^2 / n$
  $\text{SSE} = \text{SST} - \text{SSTR}$

• Mean squares in one-way ANOVA:
  $\text{MSTR} = \frac{\text{SSTR}}{k - 1}$,  $\text{MSE} = \frac{\text{SSE}}{n - k}$

• Test statistic for one-way ANOVA (independent samples, normal populations, and equal population standard deviations):
  $F = \frac{\text{MSTR}}{\text{MSE}}$
  with df = $(k - 1, n - k)$.
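
A compact sketch of the one-way ANOVA computing formulas, using three made-up samples; the F statistic and degrees of freedom follow the definitions above.

```python
samples = [                                    # illustrative samples from k = 3 populations
    [12.1, 13.4, 11.8, 12.9],
    [14.2, 15.1, 13.8, 14.6, 15.0],
    [10.9, 11.5, 12.0, 11.2],
]

all_data = [x for sample in samples for x in sample]
k, n = len(samples), len(all_data)

# Computing formulas for the sums of squares
SST = sum(x**2 for x in all_data) - sum(all_data) ** 2 / n
SSTR = sum(sum(s) ** 2 / len(s) for s in samples) - sum(all_data) ** 2 / n
SSE = SST - SSTR

MSTR = SSTR / (k - 1)                          # treatment mean square
MSE = SSE / (n - k)                            # error mean square
F = MSTR / MSE                                 # F statistic with df = (k - 1, n - k)

print(SST, SSTR, SSE, F)
```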
CHAPTER 14 Inferential Methods in Regression and Correlation

• Population regression equation: $y = \beta_0 + \beta_1 x$

• Standard error of the estimate: $s_e = \sqrt{\frac{\text{SSE}}{n - 2}}$

• Test statistic for $H_0$: $\beta_1 = 0$:
  $t = \frac{b_1}{s_e / \sqrt{S_{xx}}}$
  with df = $n - 2$.

• Confidence interval for $\beta_1$:
  $b_1 \pm t_{\alpha/2} \cdot \frac{s_e}{\sqrt{S_{xx}}}$
  with df = $n - 2$.

• Confidence interval for the conditional mean of the response variable corresponding to $x_p$:
  $\hat{y}_p \pm t_{\alpha/2} \cdot s_e \sqrt{\frac{1}{n} + \frac{(x_p - \sum x / n)^2}{S_{xx}}}$
  with df = $n - 2$.

• Prediction interval for an observed value of the response variable corresponding to $x_p$:
  $\hat{y}_p \pm t_{\alpha/2} \cdot s_e \sqrt{1 + \frac{1}{n} + \frac{(x_p - \sum x / n)^2}{S_{xx}}}$
  with df = $n - 2$.

• Test statistic for $H_0$: $\rho = 0$:
  $t = \frac{r}{\sqrt{\frac{1 - r^2}{n - 2}}}$
  with df = $n - 2$.
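
Building on the Chapter 4 quantities, this sketch (same invented x and y data) computes the standard error of the estimate and the t statistic for $H_0$: $\beta_1 = 0$; scipy.stats, assumed available, gives the two-tailed P-value.

```python
from math import sqrt
from scipy.stats import t             # assumes SciPy is available

x = [1, 2, 3, 4, 5, 6]                 # illustrative data (same form as the Chapter 4 sketch)
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]
n = len(x)

Sxx = sum(xi**2 for xi in x) - sum(x) ** 2 / n
Syy = sum(yi**2 for yi in y) - sum(y) ** 2 / n
Sxy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n

b1 = Sxy / Sxx
SSE = Syy - Sxy**2 / Sxx
se = sqrt(SSE / (n - 2))               # standard error of the estimate

t_stat = b1 / (se / sqrt(Sxx))         # test statistic for H0: beta_1 = 0, df = n - 2
p_value = 2 * t.sf(abs(t_stat), df=n - 2)

print(se, t_stat, p_value)
```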
