12 Expected Value and Variance
12.1 Expected Value of a Random Variable
The expected value of a random variable is the average of the values the variable can take, weighted by the probabilities of those values. It is usually denoted \(E[X]\) or \(\mu_X\).

12.1.1 Discrete Case

If \(X\) is a discrete random variable having a probability mass function \(p(x)\), then the expected value of \(X\) is defined by \[\text E[X] = \sum_{x : p(x) > 0}x \cdot p(x)\] In other words, the expected value of \(X\) is a weighted average of the possible values that \(X\) can take on, each value being weighted by the probability that \(X\) assumes that value.
Example: If the pmf of \(X\) is given by \[p(1) = \frac{1}{3} \hskip6em p(2) = \frac{2}{3} \] then \(\text E[X] = 1 \times \frac{1}{3} + 2 \times \frac{2}{3} = \frac{5}{3}\).
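This weighted sum is easy to check mechanically. Below is a minimal Python sketch (using only the standard library) that evaluates the definition for the pmf above and compares it with a simulation-based sample mean; the simulation is purely illustrative, not part of the definition:

```python
import random

# pmf from the example above: P(X=1) = 1/3, P(X=2) = 2/3
pmf = {1: 1/3, 2: 2/3}

# E[X] = sum over the support of x * p(x)
expectation = sum(x * p for x, p in pmf.items())
print(expectation)  # 1.666... = 5/3

# Illustrative check: by the law of large numbers, the sample
# mean of many independent draws should be close to E[X].
values, weights = zip(*pmf.items())
samples = random.choices(values, weights=weights, k=100_000)
print(sum(samples) / len(samples))  # approximately 5/3
```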
Example: Let \(A\) be an event. The indicator random variable of \(A\), denoted \(I_A\), takes values in \(\{0,1\}\) and is defined by \[I_A(\omega) = \begin{cases}1 & \text{if } \omega \in A \\ 0 &\text{otherwise}\end{cases}\] Find \(\text E[I_A]\).
Solution: Since \(P(I_A = 1) = P(A)\) and \(P(I_A = 0) = P(A^c)\), we have \[\text E[I_A] = 1 \times P(A) + 0 \times P(A^c) = P(A)\] That is, the expected value of an indicator random variable is just the probability of the event it indicates.

12.1.2 Continuous Case

If \(X\) is a continuous random variable having a probability density function \(f(x)\), then the expected value of \(X\) is defined by \[\text E[X] = \int_{-\infty}^\infty x\cdot f(x)\, dx\] Example: Let \(X\) be a continuous random variable with pdf \[f(x) = 4x e^{-2x}, \hskip4em x > 0.\] Calculate the expectation of \(X\).
Solution: Since \(f(x) = 0\) for \(x \le 0\), we have \[\text E[X] = \int_0^\infty 4x^2 e^{-2x}\, dx\] Substituting \(t = 2x\) gives \[\text E[X] = \frac{1}{2} \int_0^\infty t^2 e^{-t}\, dt = \frac{1}{2}\times 2! = 1\] where the last step uses the gamma integral \(\int_0^\infty t^n e^{-t}\, dt = n!\).
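As a sanity check on this calculation, the integrals can be evaluated numerically; a small sketch, assuming SciPy is available:

```python
import math
from scipy.integrate import quad

# pdf from the example: f(x) = 4x e^{-2x} for x > 0
f = lambda x: 4 * x * math.exp(-2 * x)

# f should integrate to 1 over (0, infinity)
total, _ = quad(f, 0, math.inf)
print(total)  # ~1.0

# E[X] = integral of x * f(x) dx
mean, _ = quad(lambda x: x * f(x), 0, math.inf)
print(mean)  # ~1.0, matching the gamma-integral computation
```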
Properties:

* \(\text E[aX + b] = a\,\text E[X] + b\)
* \(\text E[X] = \int_0^\infty S(x)\, dx - \int_{-\infty}^0 F(x)\, dx\), where \(F\) is the cdf of \(X\) and \(S(x) = 1 - F(x)\) is its survival function
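The second identity expresses the mean in terms of the cdf alone. A quick numerical check, assuming SciPy and taking a normal distribution with mean 1 as the test case:

```python
from scipy.stats import norm
from scipy.integrate import quad

# Assumed test case: X ~ Normal(mean=1, sd=2), so E[X] = 1
X = norm(loc=1, scale=2)

pos, _ = quad(X.sf, 0, float("inf"))     # integral of S(x) over (0, inf)
neg, _ = quad(X.cdf, float("-inf"), 0)   # integral of F(x) over (-inf, 0)
print(pos - neg)  # ~1.0 = E[X]
```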
12.2 Expectation of a Function of a Random Variable
12.2.1 Discrete Case
If \(X\) is a discrete random variable with probability mass function \(p(x)\), then for any real-valued function \(g\): \[\text E[g(X)] = \sum_{x: p(x) > 0} g(x) p(x)\] Example: Suppose \(X\) has the following probability mass function: \[p(0) = 0.2,\qquad p(1) = 0.5,\qquad p(2) = 0.3\] Calculate \(\text E[X^2]\).
Solution: \[\text E[X^2] = 0^2(0.2) + 1^2(0.5) + 2^2(0.3) = 1.7\]

12.2.2 Continuous Case

If \(X\) is a continuous random variable with probability density function \(f(x)\), then for any real-valued function \(g\): \[\text E[g(X)] = \int_{-\infty}^\infty g(x) f(x)\,dx\]
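Before the worked example, here is a quick numerical illustration of this formula, assuming SciPy/NumPy and taking \(X \sim \text{Exponential}(1)\) with \(g(x) = x^2\) as the example (for which the exact answer is \(\text E[X^2] = 2\)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import expon

# E[g(X)] = integral of g(x) f(x) dx, with g(x) = x^2 and
# f the Exponential(1) density
g = lambda x: x**2
val, _ = quad(lambda x: g(x) * expon.pdf(x), 0, float("inf"))
print(val)  # ~2.0

# Monte Carlo comparison: average g over simulated draws of X
rng = np.random.default_rng(0)
print(g(rng.exponential(size=100_000)).mean())  # also ~2.0
```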
Example: Let \(X\) be uniformly distributed over \((0,1)\), i.e. \(f(x) = I_{[0,1]}(x)\). Calculate \(E[X^3]\).
Solution: \[\text E[X^3] = \int_0^1 x^3\, dx = \frac{x^4}{4} \Bigg|_0^1 = \frac{1}{4}\]

12.3 Variance of a Random Variable

Another quantity of interest is the variance of a random variable \(X\), denoted by \(\text{Var}(X)\), which measures how far the values of \(X\) spread around the mean. It is defined by \[\text{Var}(X) = \text E\Big[(X-\text E[X])^2\Big] \] The standard deviation of a random variable \(X\) is denoted \(\sigma_X\) and is defined as \[\sigma_X = \sqrt{\text{Var}(X)}\] Properties:

* \(\text{Var}(X)= \text E[X^2] - \Big(\text E[X]\Big)^2\)
* \(\text{Var}(c) = 0\) for any constant \(c\)
* \(\text{Var}(aX+b) = a^2\, \text{Var}(X)\)
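The shortcut formula \(\text{Var}(X)= \text E[X^2] - (\text E[X])^2\) is usually the easiest route in practice. A small sketch applying it to the discrete pmf from Section 12.2.1 (\(p(0)=0.2,\ p(1)=0.5,\ p(2)=0.3\)):

```python
# pmf from the discrete example in Section 12.2.1
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

ex  = sum(x * p for x, p in pmf.items())     # E[X]   = 1.1
ex2 = sum(x**2 * p for x, p in pmf.items())  # E[X^2] = 1.7
var = ex2 - ex**2                            # Var(X) = 0.49
sd  = var ** 0.5                             # sigma  = 0.7
print(ex, ex2, var, sd)
```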
12.3.1 Coefficient of Variation
The coefficient of variation is defined as the ratio of the standard deviation \(\sigma_X\) to the mean \(\mu_X\): \[CV[X] = \frac{\sigma_X}{\mu_X}\] It shows the extent of variability in relation to the mean of the population.
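Continuing the discrete example above, the coefficient of variation is a one-line computation (values taken from the variance sketch):

```python
mu, sigma = 1.1, 0.7  # E[X] and sigma from the variance sketch above
print(sigma / mu)     # ~0.636: sigma is about 64% of the mean
```

Note that \(CV[X]\) is dimensionless, which is what makes it useful for comparing variability across quantities measured on different scales; for instance, every exponential distribution has \(CV = 1\), since its mean and standard deviation are both \(1/\lambda\).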