Conditional Mean, Variance, and MMSE

Nov 30, 2023 · Zakir Hussain Shaik · 3 min read

In this blog post, we take a closer look at the relationships between conditional mean, variance, conditional Probability Density Functions (PDFs), and how they connect to the Minimum Mean Square Error (MMSE) estimator. By breaking down these concepts step by step, we aim to make the topic approachable while maintaining its technical depth.

Conditional Probability Density Function

To understand the MMSE estimator, we first need to explore the concept of conditional probability density functions (PDFs). Consider two random variables, $X$ and $Y$ (which could be complex-valued). Let $f_{X|Y}(x|y)$ represent the conditional PDF of $X$ given $Y$.

Conditioning on a Realization $Y=y$

When conditioned on a specific realization $Y=y$ (i.e., $X|Y=y$), the corresponding PDF becomes $f_{X|Y=y}(x)$, a function of $x$ alone. Its mean and variance are deterministic quantities, represented as:

$$\mathbb{E}[X|Y=y] = \int x\, f_{X|Y=y}(x)\,\mathrm{d}x, \qquad \mathbb{V}[X|Y=y] = \int \big|x - \mathbb{E}[X|Y=y]\big|^2 f_{X|Y=y}(x)\,\mathrm{d}x$$

Conditioning on a Random Variable $Y$

In contrast, when conditioned on the random variable $Y$ (i.e., $X|Y$), the corresponding PDF becomes $f_{X|Y}(x|y)$, a function of both $x$ and $y$. Its mean and variance, denoted as $\mathbb{E}[X|Y]$ and $\mathbb{V}[X|Y]$, are functions of $Y$. Unlike the previous case, these terms are random variables. Specifically, each realization of $\mathbb{E}[X|Y]$ takes the form $\mathbb{E}[X|Y=y]$, and similarly, each realization of $\mathbb{V}[X|Y]$ is $\mathbb{V}[X|Y=y]$. Thus, the posterior density of $X$ varies for different realizations of $Y=y$.
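To make this distinction concrete, here is a minimal numerical sketch in Python (NumPy), using a simple real scalar Gaussian toy model that is an assumption for illustration and not part of the post: $X \sim \mathcal{N}(0,1)$ and $Y = X + N$ with independent $N \sim \mathcal{N}(0,\sigma^2)$. For this pair, $\mathbb{E}[X|Y=y]$ is a single number, while $\mathbb{E}[X|Y]$ is a random variable with one value per realization of $Y$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model (not from the post): X ~ N(0, 1), Y = X + N, N ~ N(0, sigma2)
sigma2 = 0.5
n = 100_000
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, np.sqrt(sigma2), n)

# For this jointly Gaussian pair, E[X|Y=y] = y / (1 + sigma2) and
# V[X|Y=y] = sigma2 / (1 + sigma2), which is a constant.
cond_mean = lambda y_val: y_val / (1.0 + sigma2)

# Conditioning on one realization Y = y0: the conditional mean is a number.
y0 = 1.3
print("E[X|Y=y0] =", cond_mean(y0))

# Conditioning on the random variable Y: E[X|Y] is itself a random variable,
# taking one value per realization of Y.
e_x_given_Y = cond_mean(y)
print("first few realizations of E[X|Y]:", e_x_given_Y[:5])
```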

MMSE Estimator

Building on the concept of conditional PDFs, we now define the MMSE estimator. The MMSE estimator is given as:

$$\widehat{X}= g(Y) = \mathbb{E}[X|Y]$$

Here, $g(\cdot)$ represents a function. Therefore, the MMSE estimator of $X$ is a function of the random variable $Y$, making the estimator itself a random variable.

For a specific realization $Y=y$, the estimate is:

$$\widehat{x}= g(y) = \mathbb{E}_{X}[X|Y=y]$$

This represents the mean of the posterior density $f_{X|Y=y}(x)$.

The Mean Square Error (MSE) of the MMSE estimator for a particular realization $Y=y$ is:

$$\mathbb{E}_{X}[|X - \widehat{X}|^2| Y = y]= \mathbb{V}_{X}[X|Y=y]$$

This quantity represents the variance of the posterior density $f_{X|Y=y}(x)$.

However, our primary interest lies in the MSE of the estimator across all realizations of $Y$. The error variance $\mathbb{V}_{\epsilon}$ of the MMSE estimator is:

$$\mathbb{V}_{\epsilon}= \mathbb{E}_{X,Y}[|X - \mathbb{E}[X|Y]|^2]= \mathbb{E}_{Y}[\mathbb{E}_{X}[|X - \mathbb{E}[X|Y]|^2|Y]]= \mathbb{E}_{Y}[\mathbb{V}[X|Y]]$$
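The name "minimum" MSE reflects that no other estimator $g(Y)$ achieves a lower MSE. As a quick sanity check, the sketch below reuses the assumed scalar Gaussian toy model from above and compares the empirical MSE of the conditional-mean estimator against two ad hoc alternatives; the conditional mean should win, with an MSE matching $\mathbb{E}_Y[\mathbb{V}[X|Y]]$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same assumed scalar Gaussian toy model as above:
# X ~ N(0, 1), Y = X + N, N ~ N(0, sigma2), so E[X|Y] = Y / (1 + sigma2)
# and V[X|Y] = sigma2 / (1 + sigma2), a constant for this jointly Gaussian pair.
sigma2 = 0.5
n = 200_000
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, np.sqrt(sigma2), n)

estimators = {
    "conditional mean E[X|Y]": y / (1.0 + sigma2),
    "raw observation Y": y,
    "scaled observation 0.5*Y": 0.5 * y,
}

# The conditional-mean estimator should achieve the smallest empirical MSE,
# and that MSE should match E[V[X|Y]] = sigma2 / (1 + sigma2) = 1/3.
for name, x_hat in estimators.items():
    print(f"{name:26s} MSE = {np.mean(np.abs(x - x_hat) ** 2):.4f}")
print("E[V[X|Y]] =", sigma2 / (1.0 + sigma2))
```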

An intriguing result emerges when exploring the relationship between the expectation of conditional variance and the variance of conditional expectation:

$$\mathbb{V}[X] = \mathbb{E}[\mathbb{V}[X|Y]] + \mathbb{V}[\mathbb{E}[X|Y]]$$

This result is known as the law of total variance. It is analogous to the law of total expectation:

$$\mathbb{E}_Y[\mathbb{E}_X[X|Y]] = \mathbb{E}_X[X]$$

For completeness, note the following identity:

$$\mathbb{E}_Y[\mathbb{E}_X[g(X)|Y]] = \mathbb{E}_X[g(X)]$$

where $g(\cdot)$ is any function.
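Both identities can be verified numerically. The following sketch again uses the assumed scalar Gaussian toy model, for which $\mathbb{E}[X|Y] = Y/(1+\sigma^2)$ and $\mathbb{V}[X|Y] = \sigma^2/(1+\sigma^2)$ in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed scalar Gaussian toy model: X ~ N(0, 1), Y = X + N, N ~ N(0, sigma2).
sigma2 = 0.5
n = 500_000
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, np.sqrt(sigma2), n)

# Closed forms for this model:
e_x_given_Y = y / (1.0 + sigma2)        # E[X|Y]: one value per realization of Y
v_x_given_Y = sigma2 / (1.0 + sigma2)   # V[X|Y]: constant for this jointly Gaussian pair

# Law of total expectation: E[E[X|Y]] = E[X]
print("E[E[X|Y]]:", np.mean(e_x_given_Y), " vs  E[X]:", np.mean(x))

# Law of total variance: V[X] = E[V[X|Y]] + V[E[X|Y]]
lhs = np.var(x)
rhs = v_x_given_Y + np.var(e_x_given_Y)
print("V[X]:", lhs, " vs  E[V[X|Y]] + V[E[X|Y]]:", rhs)
```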

MMSE Estimator for Linear Model

To further illustrate the MMSE estimator, let us consider its application in a linear model. When dealing with random vectors (denoted by lowercase bold letters), consider a scenario with a Gaussian prior on the signal vector $\mathbf{x} \sim \mathcal{CN}(\boldsymbol{\mu}_{\mathbf{x}}, \mathbf{C}_{\mathbf{x}})$, independent additive Gaussian noise $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \mathbf{C}_{\mathbf{n}})$, and a linear receive model incorporating a known channel $\mathbf{H} \in \mathbb{C}^{N\times K}$:

$$\mathbf{y} = \mathbf{Hx} + \mathbf{n}$$

The posterior density of $\mathbf{x}|\mathbf{y}$ is also complex Gaussian:

$$\mathbf{x}|\mathbf{y} \sim \mathcal{CN}\left(\boldsymbol{\mu}_{\mathbf{x}|\mathbf{y}}, \mathbf{C}_{\mathbf{x}|\mathbf{y}}\right)$$

Here:

$$\boldsymbol{\mu}_{\mathbf{x}|\mathbf{y}} = \boldsymbol{\mu}_{\mathbf{x}} + \mathbf{C}_{\mathbf{x}}\mathbf{H}^{H}\left(\mathbf{H}\mathbf{C}_{\mathbf{x}}\mathbf{H}^{H} + \mathbf{C}_{\mathbf{n}}\right)^{-1}\left(\mathbf{y} - \mathbf{H}\boldsymbol{\mu}_{\mathbf{x}}\right) = \boldsymbol{\mu}_{\mathbf{x}} + \mathbf{C}_{\mathbf{x}|\mathbf{y}}\mathbf{H}^{H}\mathbf{C}_{\mathbf{n}}^{-1}\left(\mathbf{y} - \mathbf{H}\boldsymbol{\mu}_{\mathbf{x}}\right)$$

and:

$$\mathbf{C}_{\mathbf{x}|\mathbf{y}} = \mathbf{C}_{\mathbf{x}} - \mathbf{C}_{\mathbf{x}}\mathbf{H}^{H}\left(\mathbf{H}\mathbf{C}_{\mathbf{x}}\mathbf{H}^{H} + \mathbf{C}_{\mathbf{n}}\right)^{-1}\mathbf{H}\mathbf{C}_{\mathbf{x}} = \left(\mathbf{C}_{\mathbf{x}}^{-1} + \mathbf{H}^{H}\mathbf{C}_{\mathbf{n}}^{-1}\mathbf{H}\right)^{-1}$$

In both expressions, the alternative (rightmost) forms apply when the covariance matrices of the signal vector and noise are invertible. Throughout, we use $\mathbf{C}$ to denote covariance matrices, following common convention.

Remarkably, the conditional covariance $\mathbf{C}_{\mathbf{x}|\mathbf{y}}$ does not depend on $\mathbf{y}$, whereas in general a posterior covariance varies with the observation. This implies that the error covariance of the MMSE estimator equals the conditional covariance:

$$\mathbf{C}_{\epsilon}= \mathbb{E}_{\mathbf{y}}[\mathbf{C}_{\mathbf{x}|\mathbf{y}}] = \mathbf{C}_{\mathbf{x}|\mathbf{y}}$$
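As a hedged illustration of these closed-form expressions, the sketch below builds a small random instance of the linear model (the dimensions, prior, and noise covariance are arbitrary choices, not from the post), computes the posterior mean and covariance in both forms, and checks that they coincide.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative instance of the linear model y = H x + n with a complex Gaussian prior.
N, K = 6, 3
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

mu_x = np.zeros(K, dtype=complex)
C_x = np.eye(K, dtype=complex)          # prior covariance of x
C_n = 0.1 * np.eye(N, dtype=complex)    # noise covariance

# One observation drawn from the model.
x = mu_x + np.linalg.cholesky(C_x) @ ((rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2))
n = np.linalg.cholesky(C_n) @ ((rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2))
y = H @ x + n

# Form 1: innovation form (no inverse of C_x or C_n required).
S = H @ C_x @ H.conj().T + C_n
G = C_x @ H.conj().T @ np.linalg.inv(S)
mu_post_1 = mu_x + G @ (y - H @ mu_x)
C_post_1 = C_x - G @ H @ C_x

# Form 2: alternative form (valid when C_x and C_n are invertible).
C_post_2 = np.linalg.inv(np.linalg.inv(C_x) + H.conj().T @ np.linalg.inv(C_n) @ H)
mu_post_2 = mu_x + C_post_2 @ H.conj().T @ np.linalg.inv(C_n) @ (y - H @ mu_x)

# Both forms agree; note C_post does not depend on y.
print(np.allclose(mu_post_1, mu_post_2), np.allclose(C_post_1, C_post_2))
```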

In conclusion, understanding the interplay between conditional mean, variance, and conditional PDFs provides valuable insights into the MMSE estimator. By exploring these concepts step by step, particularly in scenarios like the MMSE estimator for a linear model with Gaussian prior and independent additive Gaussian noise, we can better understand the relationships between posterior density, conditional mean, covariance, and the MSE of the MMSE estimator.
