Formulae

• Addition rule: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$.

• Conditional Probability: $P(A \mid B) = \dfrac{P(A \cap B)}{P(B)}$, provided $P(B) > 0$.

• Independence: $A$ and $B$ are independent if $P(A \cap B) = P(A) \times P(B)$.

• Discrete random variable: $\mu_x = \sum_x x\,p(x)$, $\sigma_x^2 = \sum_x x^2 p(x) - \mu_x^2$.

• Continuous random variable: $\mu_x = \int_{-\infty}^{\infty} x f(x)\,dx$, $\sigma_x^2 = \int_{-\infty}^{\infty} x^2 f(x)\,dx - \mu_x^2$.
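As a quick numerical check of the discrete-case formulas, here is a minimal sketch; the pmf values below are made up for illustration and are not from the sheet.

```python
import numpy as np

# Hypothetical pmf of a discrete random variable X (probabilities sum to 1).
x = np.array([0, 1, 2, 3])
p = np.array([0.1, 0.3, 0.4, 0.2])

mu = np.sum(x * p)               # mu_x = sum over x of x p(x)
var = np.sum(x**2 * p) - mu**2   # sigma_x^2 = sum of x^2 p(x) minus mu_x^2

print(mu, var)                   # 1.7 and 0.81
```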

• If $X_1, \ldots, X_n$ are random variables, the mean of any linear combination is given by $\mu_{c_1 X_1 + \cdots + c_n X_n} = c_1 \mu_{X_1} + \cdots + c_n \mu_{X_n}$.

• If $X_1, \ldots, X_n$ are independent random variables, the variance of any linear combination is given by $\sigma^2_{c_1 X_1 + \cdots + c_n X_n} = c_1^2 \sigma^2_{X_1} + \cdots + c_n^2 \sigma^2_{X_n}$.

• Binomial distribution: $p(x) = \binom{n}{x} p^x (1-p)^{n-x}$; $x = 0, 1, \ldots, n$. $\mu_x = np$, $\sigma_x^2 = np(1-p)$. Bernoulli is a special case with $n = 1$.

• Poisson distribution: $p(x) = e^{-\lambda}\lambda^x / x!$; $x = 0, 1, \ldots$. $\mu_x = \lambda$, $\sigma_x^2 = \lambda$.

• Hypergeometric: $p(x) = \dfrac{\binom{R}{x}\binom{N-R}{n-x}}{\binom{N}{n}}$, $\mu_x = \dfrac{nR}{N}$, $\sigma_x^2 = \dfrac{nR}{N}\left(1 - \dfrac{R}{N}\right)\dfrac{N-n}{N-1}$.

• Geometric: $p(x) = (1-p)^{x-1} p$; $x = 1, 2, \ldots$. $\mu_x = \dfrac{1}{p}$, $\sigma_x^2 = \dfrac{1-p}{p^2}$.

• Negative binomial: $p(x) = \binom{x-1}{r-1} p^r (1-p)^{x-r}$; $x = r, r+1, \ldots$. $\mu_x = \dfrac{r}{p}$, $\sigma_x^2 = \dfrac{r(1-p)}{p^2}$.

• Normal distribution: If $X \sim N(\mu, \sigma^2)$, then $Z = \dfrac{X - \mu}{\sigma} \sim N(0, 1)$.
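The pmfs and moments above can be cross-checked with scipy.stats; a sketch, where the parameter values ($n = 10$, $p = 0.3$, $\lambda = 2$) are arbitrary choices for the demonstration.

```python
from scipy import stats

n, p, lam = 10, 0.3, 2.0

# Binomial: pmf at x = 4, plus mean np and variance np(1 - p).
print(stats.binom.pmf(4, n, p), stats.binom.mean(n, p), stats.binom.var(n, p))

# Poisson: pmf at x = 3, plus mean and variance, both lambda.
print(stats.poisson.pmf(3, lam), stats.poisson.mean(lam), stats.poisson.var(lam))

# Geometric on support 1, 2, ...: mean 1/p and variance (1 - p)/p^2.
print(stats.geom.mean(p), stats.geom.var(p))
```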

• Lognormal distribution: If $Y \sim LN(\mu, \sigma^2)$, then $X = \log_e(Y) \sim N(\mu, \sigma^2)$. $\mu_y = e^{\mu + \sigma^2/2}$, $\sigma_y^2 = e^{2\mu + 2\sigma^2} - e^{2\mu + \sigma^2}$.

• Exponential: $f(x) = \lambda e^{-\lambda x}$, $x > 0$. $\mu_x = \dfrac{1}{\lambda}$, $\sigma_x^2 = \dfrac{1}{\lambda^2}$.

• Uniform distribution: $f(x) = \dfrac{1}{b-a}$, $a < x < b$. $\mu_x = \dfrac{a+b}{2}$, $\sigma_x^2 = \dfrac{(b-a)^2}{12}$.

• Central Limit Theorem: If $X_1, \ldots, X_n$ are independent random variables each with mean $\mu$ and standard deviation $\sigma$, then the following hold approximately:
$$S = X_1 + \cdots + X_n \sim N(n\mu,\, n\sigma^2)$$
$$\bar{X} = \frac{X_1 + \cdots + X_n}{n} \sim N\!\left(\mu,\, \frac{\sigma^2}{n}\right)$$
provided $n$ is large ($n > 30$).
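A small simulation sketch of the statement about $\bar{X}$; the exponential population, sample size $n = 40$, and 10,000 replicates are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, lam = 40, 10_000, 0.5   # arbitrary settings for the demo

# Each row is one sample of size n from an exponential population.
samples = rng.exponential(scale=1 / lam, size=(reps, n))
xbar = samples.mean(axis=1)

# CLT: xbar is approximately N(mu, sigma^2 / n) with mu = sigma = 1/lam = 2.
print(xbar.mean())                         # close to 2.0
print(xbar.std(ddof=1), 2 / np.sqrt(n))    # both close to 0.316
```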
• Large sample confidence interval for $\mu$:
$$\bar{X} \pm z_{\alpha/2}\frac{\sigma}{\sqrt{n}}.$$
Sample size needed to get a desired confidence bound $B$: $n = \dfrac{z_{\alpha/2}^2\,\sigma^2}{B^2}$.
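A sketch of both formulas in code; the data, the assumed known $\sigma$, and the target bound $B$ are placeholder values.

```python
import numpy as np
from scipy import stats

x = np.array([12.1, 11.8, 12.5, 12.0, 11.9, 12.3])  # placeholder observations
sigma, alpha = 0.4, 0.05                            # assumed known sd, 95% level

z = stats.norm.ppf(1 - alpha / 2)
half_width = z * sigma / np.sqrt(len(x))
print(x.mean() - half_width, x.mean() + half_width)  # large-sample CI for mu

# Sample size needed for a desired bound B.
B = 0.1
print(np.ceil((z * sigma / B) ** 2))
```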

• Confidence interval for $p$:
$$\tilde{p} \pm z_{\alpha/2}\sqrt{\frac{\tilde{p}(1 - \tilde{p})}{\tilde{n}}}$$
where $\tilde{n} = n + 4$, $\tilde{p} = \dfrac{X + 2}{n + 4}$.
Sample size needed for a desired confidence bound $B$: $n = \dfrac{z_{\alpha/2}^2\, p^*(1 - p^*)}{B^2}$ where $p^*$ is a guess of $p$.
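A sketch of the adjusted interval and the sample-size formula; $X = 37$ successes out of $n = 120$, $B = 0.03$, and $p^* = 0.3$ are made-up inputs.

```python
import numpy as np
from scipy import stats

X, n, alpha = 37, 120, 0.05            # hypothetical successes, trials, level
z = stats.norm.ppf(1 - alpha / 2)

n_tilde = n + 4
p_tilde = (X + 2) / n_tilde
half = z * np.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
print(p_tilde - half, p_tilde + half)  # CI for p

# Sample size for bound B using a guess p_star of p.
B, p_star = 0.03, 0.3
print(np.ceil(z**2 * p_star * (1 - p_star) / B**2))
```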

• Small sample confidence interval for $\mu$:
$$\bar{X} \pm t_{n-1,\,\alpha/2}\frac{s}{\sqrt{n}}.$$

• Large sample CI for $\mu_X - \mu_Y$ based on independent samples:
$$\bar{X} - \bar{Y} \pm z_{\alpha/2}\sqrt{\frac{\sigma_X^2}{n_X} + \frac{\sigma_Y^2}{n_Y}}.$$

• CI for $p_X - p_Y$:
$$\tilde{p}_X - \tilde{p}_Y \pm z_{\alpha/2}\sqrt{\frac{\tilde{p}_X(1 - \tilde{p}_X)}{\tilde{n}_X} + \frac{\tilde{p}_Y(1 - \tilde{p}_Y)}{\tilde{n}_Y}}$$
where $\tilde{p}_X = \dfrac{X + 1}{n_X + 2}$, $\tilde{p}_Y = \dfrac{Y + 1}{n_Y + 2}$, $\tilde{n}_X = n_X + 2$, $\tilde{n}_Y = n_Y + 2$.

• Small-sample CI for $\mu_X - \mu_Y$ based on independent samples, when $\sigma_X^2 \neq \sigma_Y^2$:
$$\bar{X} - \bar{Y} \pm t_{\nu,\,\alpha/2}\sqrt{\frac{s_X^2}{n_X} + \frac{s_Y^2}{n_Y}}$$
where
$$\nu = \frac{\left(\dfrac{s_X^2}{n_X} + \dfrac{s_Y^2}{n_Y}\right)^2}{\dfrac{(s_X^2/n_X)^2}{n_X - 1} + \dfrac{(s_Y^2/n_Y)^2}{n_Y - 1}}, \qquad (1)$$
rounded down to the nearest integer.
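A sketch of the interval and of $\nu$ from equation (1), using made-up summary statistics for the two samples.

```python
import numpy as np
from scipy import stats

xbar, s2x, nx = 10.2, 4.0, 12   # hypothetical mean, variance, size for sample X
ybar, s2y, ny = 8.7, 9.0, 15    # hypothetical values for sample Y
alpha = 0.05

se2 = s2x / nx + s2y / ny
nu = se2**2 / ((s2x / nx) ** 2 / (nx - 1) + (s2y / ny) ** 2 / (ny - 1))
nu = int(np.floor(nu))          # rounded down, as in (1)

half = stats.t.ppf(1 - alpha / 2, df=nu) * np.sqrt(se2)
print(nu, (xbar - ybar) - half, (xbar - ybar) + half)
```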
• Small-sample CI for $\mu_X - \mu_Y$ based on independent samples, when $\sigma_X^2 = \sigma_Y^2$:
$$\bar{X} - \bar{Y} \pm t_{n_X + n_Y - 2,\,\alpha/2}\; s_p\sqrt{\frac{1}{n_X} + \frac{1}{n_Y}}$$
where
$$s_p = \sqrt{\frac{(n_X - 1)s_X^2 + (n_Y - 1)s_Y^2}{n_X + n_Y - 2}}. \qquad (2)$$
• CI for paired data:
$$\bar{D} \pm t_{n-1,\,\alpha/2}\frac{s_D}{\sqrt{n}}$$
where $D = X - Y$.
• Large sample test for $\mu$: Test statistic $z^* = \dfrac{\bar{X} - \mu_0}{\sigma/\sqrt{n}}$.
$$P\text{-value} = \begin{cases} P(Z > z^*) & \text{if } H_1: \mu > \mu_0 \\ P(Z < z^*) & \text{if } H_1: \mu < \mu_0 \\ 2 \times P(Z > |z^*|) & \text{if } H_1: \mu \neq \mu_0. \end{cases}$$

• Test for $p$: Test statistic $z^* = \dfrac{\hat{p} - p_0}{\sqrt{p_0(1 - p_0)/n}}$ (assume $np_0 > 10$, $n(1 - p_0) > 10$).
$$P\text{-value} = \begin{cases} P(Z > z^*) & \text{if } H_1: p > p_0 \\ P(Z < z^*) & \text{if } H_1: p < p_0 \\ 2 \times P(Z > |z^*|) & \text{if } H_1: p \neq p_0. \end{cases}$$

• Small sample test for $\mu$: Test statistic $t^* = \dfrac{\bar{X} - \mu_0}{s/\sqrt{n}}$.
$$P\text{-value} = \begin{cases} P(t_{n-1} > t^*) & \text{if } H_1: \mu > \mu_0 \\ P(t_{n-1} < t^*) & \text{if } H_1: \mu < \mu_0 \\ 2 \times P(t_{n-1} > |t^*|) & \text{if } H_1: \mu \neq \mu_0. \end{cases}$$

• Large sample test for $\mu_X - \mu_Y$: Test statistic: $z^* = \dfrac{\bar{X} - \bar{Y} - \Delta_0}{\sqrt{\sigma_X^2/n_X + \sigma_Y^2/n_Y}}$.
$$P\text{-value} = \begin{cases} P(Z > z^*) & \text{if } H_1: \mu_X - \mu_Y > \Delta_0 \\ P(Z < z^*) & \text{if } H_1: \mu_X - \mu_Y < \Delta_0 \\ 2 \times P(Z > |z^*|) & \text{if } H_1: \mu_X - \mu_Y \neq \Delta_0. \end{cases}$$
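A sketch of the large-sample z test for $\mu$, showing how the three P-value cases map to the normal cdf and survival function; the data, $\mu_0$, and $\sigma$ are placeholders.

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.4, 5.0, 5.3, 4.9, 5.2, 5.5])  # placeholder data
mu0, sigma = 5.0, 0.25                                   # assumed known sigma

z_star = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))

p_upper = stats.norm.sf(z_star)            # H1: mu > mu0
p_lower = stats.norm.cdf(z_star)           # H1: mu < mu0
p_two = 2 * stats.norm.sf(abs(z_star))     # H1: mu != mu0
print(z_star, p_upper, p_lower, p_two)
```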

• Large sample test for $p_X - p_Y$: Test statistic: $z^* = \dfrac{\hat{p}_X - \hat{p}_Y}{\sqrt{\hat{p}(1 - \hat{p})(1/n_X + 1/n_Y)}}$ where $\hat{p}_X = \dfrac{X}{n_X}$, $\hat{p}_Y = \dfrac{Y}{n_Y}$ and $\hat{p} = \dfrac{X + Y}{n_X + n_Y}$.
$$P\text{-value} = \begin{cases} P(Z > z^*) & \text{if } H_1: p_X > p_Y \\ P(Z < z^*) & \text{if } H_1: p_X < p_Y \\ 2 \times P(Z > |z^*|) & \text{if } H_1: p_X \neq p_Y. \end{cases}$$

• Small-sample test for $\mu_X - \mu_Y$, independent samples (assume $\sigma_X^2 \neq \sigma_Y^2$): Test statistic: $t^* = \dfrac{\bar{X} - \bar{Y} - \Delta_0}{\sqrt{s_X^2/n_X + s_Y^2/n_Y}}$.
$$P\text{-value} = \begin{cases} P(t_\nu > t^*) & \text{if } H_1: \mu_X > \mu_Y \\ P(t_\nu < t^*) & \text{if } H_1: \mu_X < \mu_Y \\ 2 \times P(t_\nu > |t^*|) & \text{if } H_1: \mu_X \neq \mu_Y \end{cases}$$
where $\nu$ is as in (1).
• Small-sample test for $\mu_X - \mu_Y$, independent samples (assume $\sigma_X^2 = \sigma_Y^2$): Test statistic: $t^* = \dfrac{\bar{X} - \bar{Y} - \Delta_0}{s_p\sqrt{1/n_X + 1/n_Y}}$, where $s_p$ is as in (2).
$$P\text{-value} = \begin{cases} P(t_{n_X + n_Y - 2} > t^*) & \text{if } H_1: \mu_X > \mu_Y \\ P(t_{n_X + n_Y - 2} < t^*) & \text{if } H_1: \mu_X < \mu_Y \\ 2 \times P(t_{n_X + n_Y - 2} > |t^*|) & \text{if } H_1: \mu_X \neq \mu_Y. \end{cases}$$

• Test for paired data: Test statistic: $t^* = \dfrac{\bar{D} - \mu_0}{s_D/\sqrt{n}}$, where $D = X - Y$.
$$P\text{-value} = \begin{cases} P(t_{n-1} > t^*) & \text{if } H_1: \mu_D > \mu_0 \\ P(t_{n-1} < t^*) & \text{if } H_1: \mu_D < \mu_0 \\ 2 \times P(t_{n-1} > |t^*|) & \text{if } H_1: \mu_D \neq \mu_0. \end{cases}$$
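scipy.stats covers these three t tests directly (with $\Delta_0 = 0$ and two-sided P-values by default); the arrays below are placeholder data.

```python
import numpy as np
from scipy import stats

x = np.array([10.1, 9.8, 10.5, 10.2, 9.9, 10.4])
y = np.array([9.5, 9.7, 9.2, 9.9, 9.4, 9.6, 9.8])

# Unequal variances (Welch): degrees of freedom computed as in (1).
print(stats.ttest_ind(x, y, equal_var=False))

# Equal variances (pooled): degrees of freedom nX + nY - 2.
print(stats.ttest_ind(x, y, equal_var=True))

# Paired data: test on the differences D = X - Y (equal-length arrays).
before = np.array([12.0, 11.5, 13.1, 12.4, 11.8])
after = np.array([11.6, 11.2, 12.7, 12.5, 11.3])
print(stats.ttest_rel(before, after))
```

In recent scipy versions, the one-sided cases are available through the `alternative='greater'` or `alternative='less'` arguments of the same functions.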

• Chi-Square test of goodness-of-fit: $H_0: p_1 = p_{10}, \ldots, p_k = p_{k0}$. Test statistic:
$$\chi^2_* = \sum_{i=1}^{k}\frac{(O_i - E_i)^2}{E_i}$$
where $E_i = N p_{i0}$. $P$-value $= P(\chi^2_{k-1} > \chi^2_*)$.
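A sketch of the goodness-of-fit computation, done by hand and then with scipy.stats.chisquare; the observed counts and null proportions are made up.

```python
import numpy as np
from scipy import stats

observed = np.array([18, 22, 29, 31])   # O_i, hypothetical counts (N = 100)
p0 = np.array([0.2, 0.2, 0.3, 0.3])     # null proportions p_i0
expected = observed.sum() * p0          # E_i = N * p_i0

chi2_star = np.sum((observed - expected) ** 2 / expected)
print(chi2_star, stats.chi2.sf(chi2_star, df=len(observed) - 1))

# Same statistic and P-value in one call.
print(stats.chisquare(observed, f_exp=expected))
```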
• Chi-Square test of homogeneity: $H_0: p_{1j} = p_{2j} = \cdots = p_{Ij}$ for each $j$ ($j = 1, \ldots, J$). Test statistic:
$$\chi^2_* = \sum_{i=1}^{I}\sum_{j=1}^{J}\frac{(O_{ij} - E_{ij})^2}{E_{ij}}$$
where $E_{ij} = \dfrac{O_{i.}\,O_{.j}}{O_{..}}$. $P$-value $= P(\chi^2_{(I-1)(J-1)} > \chi^2_*)$.

• Correlation between $X$ and $Y$:
$$r = \frac{\sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y}}{\sqrt{\sum_{i=1}^{n} x_i^2 - n\bar{x}^2}\,\sqrt{\sum_{i=1}^{n} y_i^2 - n\bar{y}^2}}.$$

• Test for $\rho = \rho_0$ (where $\rho_0 \neq 0$): Use the test statistic $W = \frac{1}{2}\log_e\frac{1+r}{1-r}$, which is approximately normally distributed with mean $\mu_W = \frac{1}{2}\log_e\frac{1+\rho}{1-\rho}$ and variance $\sigma_W^2 = \frac{1}{n-3}$.
To test for $\rho = 0$, use $U = \dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}$, which has a $t_{n-2}$ distribution under $H_0$.
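A sketch of both correlation tests; $r = 0.62$, $n = 28$, and $\rho_0 = 0.5$ are placeholder values.

```python
import numpy as np
from scipy import stats

r, n, rho0 = 0.62, 28, 0.5

# Fisher transformation test for rho = rho0.
W = 0.5 * np.log((1 + r) / (1 - r))
mu_W = 0.5 * np.log((1 + rho0) / (1 - rho0))
sigma_W = np.sqrt(1 / (n - 3))
z_star = (W - mu_W) / sigma_W
print(2 * stats.norm.sf(abs(z_star)))    # two-sided P-value

# t-based test for rho = 0.
U = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
print(2 * stats.t.sf(abs(U), df=n - 2))
```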

• Least squares regression coefficients: $\hat{\beta}_1 = \dfrac{\sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y}}{\sum_{i=1}^{n} x_i^2 - n\bar{x}^2}$, $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$.
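A sketch of the coefficient formulas on made-up $(x, y)$ data, cross-checked against numpy's built-in least-squares fit.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # placeholder data
y = np.array([2.1, 2.9, 3.6, 4.4, 5.2, 5.8])
n = len(x)

beta1 = (np.sum(x * y) - n * x.mean() * y.mean()) / (np.sum(x**2) - n * x.mean()**2)
beta0 = y.mean() - beta1 * x.mean()
print(beta0, beta1)

# numpy.polyfit returns the same line (slope first, then intercept).
print(np.polyfit(x, y, deg=1))
```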

• $100(1 - \alpha)\%$ CIs for $\beta_0$ and $\beta_1$ are: $\hat{\beta}_0 \pm t_{n-2,\alpha/2}\, s_{\hat{\beta}_0}$ and $\hat{\beta}_1 \pm t_{n-2,\alpha/2}\, s_{\hat{\beta}_1}$ where
$$s_{\hat{\beta}_0} = s\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}}, \qquad s_{\hat{\beta}_1} = \frac{s}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}} \qquad \text{and} \qquad s = \sqrt{\frac{(1 - r^2)\sum_{i=1}^{n}(y_i - \bar{y})^2}{n-2}}.$$

• $100(1 - \alpha)\%$ CI for the mean predicted value at $x$ is $\hat{\beta}_0 + \hat{\beta}_1 x \pm t_{n-2,\alpha/2}\, s_{\hat{y}}$ where
$$s_{\hat{y}} = s\sqrt{\frac{1}{n} + \frac{(x - \bar{x})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}}.$$

• $100(1 - \alpha)\%$ prediction interval at $x$ is given by $\hat{\beta}_0 + \hat{\beta}_1 x \pm t_{n-2,\alpha/2}\, s_{\text{pred}}$ where
$$s_{\text{pred}} = s\sqrt{1 + \frac{1}{n} + \frac{(x - \bar{x})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}}.$$
• Regression SS (SSR) $= \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2$, Error SS (SSE) $= \sum_{i=1}^{n}(y_i - \hat{y}_i)^2$ and Total SS (SST) $= \sum_{i=1}^{n}(y_i - \bar{y})^2$.
Analysis of variance identity: SST = SSR + SSE.

• $s^2 = \dfrac{\text{SSE}}{n - p - 1}$. Coefficient of determination $R^2 = \dfrac{\text{SSR}}{\text{SST}}$.

• To test $H_0: \beta_1 = \cdots = \beta_p = 0$, use $F = \dfrac{\text{SSR}/p}{\text{SSE}/(n - p - 1)}$. Under $H_0$, $F \sim F_{p,\,n-p-1}$.

• $F$-test for one-way ANOVA:
$$\text{SSTr} = \sum_{i=1}^{I} J_i \bar{X}_{i.}^2 - N\bar{X}_{..}^2, \qquad \text{SSE} = \sum_{i=1}^{I}\sum_{j=1}^{J_i} X_{ij}^2 - \sum_{i=1}^{I} J_i \bar{X}_{i.}^2$$
$$\text{MSTr} = \frac{\text{SSTr}}{I - 1}, \qquad \text{MSE} = \frac{\text{SSE}}{N - I}, \qquad F = \frac{\text{MSTr}}{\text{MSE}}$$
Under $H_0: \mu_1 = \cdots = \mu_I$, $F \sim F_{I-1,\,N-I}$.
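A sketch of the one-way ANOVA computation on three made-up groups (SSTr is computed in its algebraically equivalent centered form), cross-checked against scipy.stats.f_oneway.

```python
import numpy as np
from scipy import stats

groups = [np.array([5.1, 4.8, 5.5, 5.0]),
          np.array([6.2, 5.9, 6.4, 6.1, 6.0]),
          np.array([5.6, 5.4, 5.9])]

N = sum(len(g) for g in groups)                 # total number of observations
I = len(groups)                                 # number of treatments
grand_mean = np.concatenate(groups).mean()

sstr = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (sstr / (I - 1)) / (sse / (N - I))
print(F, stats.f.sf(F, I - 1, N - I))           # F statistic and P-value

# scipy's built-in one-way ANOVA gives the same F and P-value.
print(stats.f_oneway(*groups))
```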
• $100(1 - \alpha)\%$ CI for $\mu_i$ is $\bar{X}_{i.} \pm t_{N-I,\,\alpha/2}\sqrt{\dfrac{\text{MSE}}{J_i}}$.

• Fisher’s Least Significant Difference Method:
– 100(1 − α)% CI for (µi − µj ) is X i. − X j. ± tN −I,α/2

q

M SE( J1i +

1
Jj

)

– To test H0 : µi = µj , reject H0 at level α if

r
|X i. − X j. | > tN −I,α/2

M SE(

1
1
+ )
Ji
Jj

• Bonferroni Method with $C$ simultaneous comparisons:
– $100(1 - \alpha)\%$ CI for $(\mu_i - \mu_j)$ is $\bar{X}_{i.} - \bar{X}_{j.} \pm t_{N-I,\alpha/(2C)}\sqrt{\text{MSE}\left(\dfrac{1}{J_i} + \dfrac{1}{J_j}\right)}$
– To test $H_0: \mu_i = \mu_j$, reject $H_0$ at level $\alpha$ if
$$|\bar{X}_{i.} - \bar{X}_{j.}| > t_{N-I,\alpha/(2C)}\sqrt{\text{MSE}\left(\frac{1}{J_i} + \frac{1}{J_j}\right)}$$

• Tukey-Kramer Method for all possible comparisons:
– $100(1 - \alpha)\%$ CI for $(\mu_i - \mu_j)$ is $\bar{X}_{i.} - \bar{X}_{j.} \pm q_{I,N-I,\alpha}\sqrt{\dfrac{\text{MSE}}{2}\left(\dfrac{1}{J_i} + \dfrac{1}{J_j}\right)}$
– To test $H_0: \mu_i = \mu_j$, reject $H_0$ at level $\alpha$ if
$$|\bar{X}_{i.} - \bar{X}_{j.}| > q_{I,N-I,\alpha}\sqrt{\frac{\text{MSE}}{2}\left(\frac{1}{J_i} + \frac{1}{J_j}\right)}$$
