Lecture 9: Expectation, Indicator Random Variables, Linearity

피망이 · November 28, 2023

CDF

  • CDF (cumulative distribution function)

    : $F(x) = P(X \le x)$, as a function of real $x$.

  • Find $P(1 < X \le 3)$ using F

    : $P(X \le 1) + P(1 < X \le 3) = P(X \le 3)$

    $\Rightarrow P(1 < X \le 3) = F(3) - F(1)$

    • in general, $P(a < X \le b) = F(b) - F(a)$
  • Properties of CDF

    (1) increasing

    (2) right continuous

    (3) $F(x) \to 0$ as $x \to -∞$, and $F(x) \to 1$ as $x \to ∞$.

    • This is "if and only if": these three properties are necessary and sufficient conditions for F to be a valid CDF
  • Indep. of random variables

    • X, Y are indep. r.v.s if $P(X \le x, Y \le y) = P(X \le x)P(Y \le y)$ for all x, y.

    • Discrete case: $P(X=x, Y=y) = P(X=x)P(Y=y)$
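The interval identity $P(a < X \le b) = F(b) - F(a)$ can be checked empirically. A minimal sketch (my own example, not from the lecture), using a fair die and the empirical CDF:

```python
import random

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]

def F(x):
    """Empirical CDF: fraction of die rolls <= x."""
    return sum(r <= x for r in rolls) / len(rolls)

# P(1 < X <= 3) computed directly vs. via F(3) - F(1)
p_interval = sum(1 < r <= 3 for r in rolls) / len(rolls)
print(p_interval, F(3) - F(1))  # identical values; both close to 1/3
```

On the same sample the two quantities agree exactly, since both count the rolls falling in $(1, 3]$.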

Averages (Means, Expected Values)

  • $1, 2, 3, 4, 5, 6 \to \frac{1+2+3+4+5+6}{6} = 3.5 = \frac{1+6}{2}$

    • in general, by Gauss's formula, $\frac{1}{n} \displaystyle \sum_{j=1}^{n} j = \frac{n+1}{2}$
  • 1, 1, 1, 1, 1, 3, 3, 5

    • Two ways

      (1) add, divide by 8

      (2) Weighted Average: $\frac{5}{8} \cdot 1 + \frac{2}{8} \cdot 3 + \frac{1}{8} \cdot 5$

      • weight is probability!
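A quick sketch of the two ways for the list above (values taken from the notes):

```python
from collections import Counter

data = [1, 1, 1, 1, 1, 3, 3, 5]

# (1) add everything and divide by 8
avg1 = sum(data) / len(data)

# (2) weighted average: each distinct value weighted by its relative frequency
counts = Counter(data)
avg2 = sum(v * c / len(data) for v, c in counts.items())

print(avg1, avg2)  # 2.0 2.0
```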
  • Average of a discrete r.v. X

    • $E(X) = \displaystyle \sum_{x} x P(X=x)$ : (value)(PMF),
      summed over x with $P(X=x) > 0$
  • $X \sim \mathrm{Bern}(p)$

    : $E(X) = 1 \cdot P(X=1) + 0 \cdot P(X=0) = p$

    • $X = \begin{cases} 1, & \text{if } A \text{ occurs} \\ 0, & \text{otherwise} \end{cases}$ (indicator r.v.)

    • Then $E(X) = P(A)$ : the fundamental bridge

      → we can think of X as an indicator random variable for the event A
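The fundamental bridge $E(X) = P(A)$ can be illustrated by simulation. A sketch with an event of my own choosing, A = "a fair die shows an even number", so $P(A) = 1/2$:

```python
import random

random.seed(42)
n = 200_000
# indicator of A = "roll is even"; its average estimates E(indicator) = P(A)
indicators = [1 if random.randint(1, 6) % 2 == 0 else 0 for _ in range(n)]
print(sum(indicators) / n)  # close to 0.5 = P(A)
```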

  • $X \sim \mathrm{Bin}(n, p)$

    : $E(X) = \displaystyle \sum_{k=0}^{n} k \binom{n}{k} p^{k} q^{n-k}$

    $= \displaystyle \sum_{k=1}^{n} n \binom{n-1}{k-1} p^{k} q^{n-k}$ : using $k\binom{n}{k} = n\binom{n-1}{k-1}$

    $= np \displaystyle \sum_{k=1}^{n} \binom{n-1}{k-1} p^{k-1} q^{n-k}$

    $= np \displaystyle \sum_{j=0}^{n-1} \binom{n-1}{j} p^{j} q^{n-1-j}$ : $j = k-1$, $k = j+1$

    $\Rightarrow np$, since the last sum is $(p+q)^{n-1} = 1$
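The result $E(X) = np$ can be confirmed numerically by summing $k \cdot P(X=k)$ directly from the PMF (a sketch with arbitrary n and p of my choosing):

```python
from math import comb

n, p = 10, 0.3
q = 1 - p
# E(X) summed term by term from the Bin(n, p) PMF
EX = sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))
print(EX)  # equals np = 3.0 up to floating-point error
```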

  • Linearity

    : $E(X+Y) = E(X) + E(Y)$

    • even if X, Y are dependent

    : $E(cX) = cE(X)$

    • if c is a const.
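A sketch of linearity under extreme dependence (my example): take $Y = X$, so X and Y are maximally dependent, yet $E(X+Y) = 2E(X)$ still holds.

```python
import random

random.seed(1)
samples = [random.randint(1, 6) for _ in range(100_000)]

mean_x = sum(samples) / len(samples)                   # E(X), about 3.5
mean_xy = sum(x + x for x in samples) / len(samples)   # E(X + Y) with Y = X
print(mean_x, mean_xy)  # the second is exactly twice the first
```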
  • Redo Bin

    • np, by linearity

    • since $X = X_1 + ... + X_n$, where $X_j \sim \mathrm{Bern}(p)$

  • Ex. 5-card hand, X = (# of aces).

    • Let $X_j$ be the indicator of the jth card being an ace, $1 \le j \le 5$

    • $E(X) = E(X_1 + ... + X_5) = E(X_1) + ... + E(X_5) = 5E(X_1)$ (by symmetry)

      By the fundamental bridge, $= 5P(\text{1st card is an ace}) = \frac{5}{13}$, even though the $X_j$'s are dependent.

    • This gives the expected value of any Hypergeometric.
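The answer $\frac{5}{13}$ can be verified exactly by summing $k \cdot P(X=k)$ over the Hypergeometric PMF, using exact rational arithmetic (a sketch; the `Fraction`-based setup is mine):

```python
from fractions import Fraction
from math import comb

# X = number of aces among 5 cards drawn from a 52-card deck with 4 aces;
# E(X) = sum over k of k * C(4,k) * C(48,5-k) / C(52,5)
EX = sum(
    Fraction(k * comb(4, k) * comb(48, 5 - k), comb(52, 5))
    for k in range(5)
)
print(EX)  # 5/13
```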

  • Geom(p)

    : indep. Bern(p) trials, count # failures before 1st success.

    ex. F F F F F S : $P(X=5) = q^{5} p$

    • Let $X \sim \mathrm{Geom}(p)$, $q = 1-p$.

    • PMF: $P(X=k) = q^{k} p$, $k \in \{0, 1, 2, ...\}$

      valid since $\displaystyle \sum_{k=0}^{∞} p q^{k} = p \displaystyle \sum_{k=0}^{∞} q^{k} = \frac{p}{1-q} = 1$ (geometric series)
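A quick numeric sanity check that the Geom(p) PMF sums to 1 (a sketch; p = 0.3 is an arbitrary choice, and the infinite sum is truncated where the terms are negligible):

```python
p = 0.3
q = 1 - p
# q**k decays geometrically, so 1000 terms capture the sum to machine precision
total = sum(p * q**k for k in range(1000))
print(total)  # about 1.0
```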

  • $X \sim \mathrm{Geom}(p)$

    : $E(X) = \displaystyle \sum_{k=0}^{∞} k p q^{k}$

    • cf. $\displaystyle \sum_{k=0}^{∞} q^{k} = \frac{1}{1-q}$ ← differentiate both sides with respect to q

      $\Rightarrow \displaystyle \sum_{k=0}^{∞} k q^{k-1} = \frac{1}{(1-q)^2}$

      $\Rightarrow \displaystyle \sum_{k=0}^{∞} k q^{k} = \frac{q}{(1-q)^2}$

      $\Rightarrow E(X) = p \cdot \frac{q}{(1-q)^2} = \frac{q}{p}$
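The series result $E(X) = \frac{q}{p}$ can be checked by summing the PMF terms directly (a sketch; p = 0.3 is arbitrary and the sum is truncated):

```python
p = 0.3
q = 1 - p
# E(X) from the PMF, truncated once the remaining terms are negligible
EX = sum(k * p * q**k for k in range(2000))
print(EX, q / p)  # both about 2.333...
```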

  • Story Proof

    • Let $c = E(X)$. Condition on the first trial: with probability p it is a success (contributing 0 failures), and with probability q it is a failure, after which the process starts over with 1 failure already counted.

    • $c = 0 \cdot p + (1+c) \cdot q = q + cq$

      $\Rightarrow c = \frac{q}{1-q} = \frac{q}{p}$
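The story proof's conclusion can be checked by Monte Carlo, sampling Geom(p) exactly as the story describes: run Bern(p) trials and count failures before the first success (a sketch; p = 0.25 is my choice, giving $q/p = 3$).

```python
import random

random.seed(7)
p = 0.25  # q/p = 3

def geom_sample():
    """Failures before the first success in i.i.d. Bern(p) trials."""
    failures = 0
    while random.random() >= p:
        failures += 1
    return failures

n = 200_000
mean = sum(geom_sample() for _ in range(n)) / n
print(mean)  # close to q/p = 3.0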

