
# White noise

White noise is a signal (or process) with a flat frequency spectrum: it has equal power in every frequency band of a given width, regardless of the band's centre frequency.

An infinite-bandwidth white noise signal is purely a theoretical construct. By having power at all frequencies, the total power of such a signal would be infinite. In practice a signal can be "white" with a flat spectrum over a defined frequency band.


## Statistical properties

A signal that is "white" in the frequency domain must have certain important statistical properties in time. For example, its autocorrelation must be zero except at zero time shift. Conversely, if the autocorrelation of a signal is zero except at zero time shift, the signal is white.

The term white noise is also commonly applied to a noise signal in the spatial domain which has zero autocorrelation with itself over the relevant space dimensions. The signal is then "white" in the spatial frequency domain (this is equally true for signals in the angular frequency domain, e.g. the distribution of a signal across all angles in the night sky).

Being uncorrelated in time does not, however, restrict the values a signal can take. Any distribution of values is possible (although the signal must have zero mean, i.e. no DC component). For example, a binary signal that can only take the values 1 or −1 will be white if the sequence of values is statistically uncorrelated. Noise with a continuous distribution, such as a normal distribution, can of course also be white.

It is often incorrectly assumed that Gaussian noise (see normal distribution) is necessarily white noise. However, neither property implies the other. Thus, the two words "Gaussian" and "white" are often both specified in mathematical models of systems. Gaussian white noise is a good approximation of many real-world situations and generates mathematically tractable models, and is used so frequently that the term additive white Gaussian noise has a standard abbreviation: AWGN.
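As a concrete sketch of these two ideas together, the following NumPy snippet (an illustrative assumption, not part of the original text) generates Gaussian white noise and checks its whiteness through the sample autocorrelation, which should be 1 at zero lag and near zero elsewhere:

```python
import numpy as np

# Additive white Gaussian noise (AWGN): independent draws from a normal
# distribution, so the sequence is both Gaussian and white.
rng = np.random.default_rng(0)
n = 100_000
w = rng.standard_normal(n)

def autocorr(x, lag):
    """Normalized sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:len(x) - lag], x[lag:]) / np.dot(x, x)

print(autocorr(w, 0))   # exactly 1.0 by construction
print(autocorr(w, 1))   # near 0: uncorrelated in time
print(autocorr(w, 50))  # near 0
```

The near-zero values at nonzero lags are the time-domain counterpart of the flat spectrum described above.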

White noise is the generalized mean-square derivative of the Wiener process or Brownian motion.

## Colors of noise

There are also other "colors" of noise, the most commonly used being pink and brown.

## Applications

One use for white noise is in the field of architectural acoustics: to mask distracting, undesirable noises (conversations, for example) in interior spaces, a constant low level of white noise is generated and played as background sound.

White noise has also been used in electronic music, where it is used either directly or as the input to a filter to create other types of noise signals.

To set up the EQ for a concert or other performance in a venue, white noise is sent through the PA system and monitored from various points in the venue so that the engineer can tell if the acoustics of the building naturally boost or cut any frequencies and can compensate with the mixer.

White noise is used as the basis of some random number generators.

## Random vector transformations

### Whitening a random vector

In statistics, a random vector is said to be "white" if its elements are uncorrelated and have unit variance. This corresponds to a flat power spectrum.

A vector can be whitened to remove nonzero correlations. This is useful in various procedures such as data compression.

One common method for whitening a vector $\mathbf{x}$ with mean $\mathbf{\mu}$ is to perform the following calculation:

$\mathbf{w} = \Lambda^{-1/2}\ E^T\ ( \mathbf{x} - \mathbf{\mu} )$

where:

$\mathbf{x}$ is the vector to be whitened,
$E$ is the orthogonal matrix of eigenvectors of the covariance matrix $\mathbb{E}\{(\mathbf{x}-\mathbf{\mu})(\mathbf{x}-\mathbf{\mu})^T\}$,
$\Lambda = \mathrm{diag}(\lambda_1,...,\lambda_n)$ is the diagonal matrix of the corresponding eigenvalues, and
$\mathbf{w}$ is the whitened vector, which has zero mean and the identity matrix as its covariance matrix,

and

$\Lambda^{-1/2} = \mathrm{diag}(\lambda_1^{-1/2},...,\lambda_n^{-1/2})$
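The whitening calculation can be sketched in NumPy as follows (the dimensions, mixing matrix, and sample size below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated toy data: 3-dimensional samples with a non-diagonal covariance.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.5, 0.3, 0.7]])
x = rng.standard_normal((5000, 3)) @ A.T + np.array([1.0, -2.0, 0.5])

# Estimate the mean mu and the covariance matrix from the data.
mu = x.mean(axis=0)
cov = np.cov(x, rowvar=False)

# Eigendecomposition of the symmetric covariance: cov = E diag(lambda) E^T.
lam, E = np.linalg.eigh(cov)

# w = Lambda^{-1/2} E^T (x - mu), applied row by row.
w = (x - mu) @ E @ np.diag(lam ** -0.5)

# The whitened data has the identity matrix as its (sample) covariance.
print(np.cov(w, rowvar=False).round(6))
```

Because the same sample mean and covariance are used in the transform, the whitened sample covariance comes out as the identity exactly, up to floating-point error.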

### Simulating a random vector

We can also simulate the 1st and 2nd moment properties of any random vector $\mathbf{x}$ with mean $\mathbf{\mu}$ via the following transformation of a white vector $\mathbf{w}$:

$\mathbf{x} = E\ \Lambda^{1/2}\ \mathbf{w} + \mu$
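This coloring transformation can be sketched in NumPy as follows (the target mean and covariance below are illustrative assumptions). The key point is that the matrix $A = E\Lambda^{1/2}$ satisfies $AA^T = \Sigma$, so applying it to a white vector yields the desired covariance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Target 1st and 2nd moments (illustrative values).
mu = np.array([1.0, -1.0])
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

# Factor Sigma = E diag(lambda) E^T and form the coloring matrix A = E Lambda^{1/2}.
lam, E = np.linalg.eigh(Sigma)
Acolor = E @ np.diag(np.sqrt(lam))

# By construction Acolor @ Acolor.T == Sigma, so x = A w + mu has covariance
# Sigma whenever w is white (zero mean, identity covariance).
assert np.allclose(Acolor @ Acolor.T, Sigma)

# Draw white vectors and transform them.
w = rng.standard_normal((10_000, 2))
x = w @ Acolor.T + mu
print(x.mean(axis=0))  # approximately mu
```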

## Random signal transformations

### Whitening a continuous-time random signal

Suppose we are given a wide-sense stationary, continuous-time random process $x(t) : t \in \mathbb{R}\,\!$ with constant mean μ, covariance function

$K_x(\tau) = \mathbb{E} \left\{ (x(t_1) - \mu) (x(t_2) - \mu)^{*} \right\} \mbox{ where } \tau = t_1 - t_2$

and power spectral density

$S_x(\omega) = \int_{-\infty}^{\infty} K_x(\tau) \, e^{-j \omega \tau} \, d\tau$

We can whiten this signal using frequency domain techniques.

Because Kx(τ) is Hermitian symmetric and positive semi-definite, it follows that Sx(ω) is real and can be factored as

$S_x(\omega) = | H(\omega) |^2 = H(\omega) \, H^{*} (\omega)$

if and only if Sx(ω) satisfies the Paley-Wiener criterion:

$\int_{-\infty}^{\infty} \frac{\log (S_x(\omega))}{1 + \omega^2} \, d \omega < \infty$

If Sx(ω) is a rational function, we can then factor it into pole-zero form as

$S_x(\omega) = \frac{\Pi_{k=1}^{N} (c_k - j \omega)(c^{*}_k + j \omega)}{\Pi_{k=1}^{D} (d_k - j \omega)(d^{*}_k + j \omega)}$

Choosing a minimum phase H(ω) so that its poles and zeros lie inside the left half s-plane, we can then whiten x(t) with the following inverse filter

$H_{inv}(\omega) = \frac{1}{H(\omega)}$

We choose the minimum phase filter so that the resulting inverse filter is stable. Additionally, we must be sure that H(ω) is strictly positive for all $\omega \in \mathbb{R}$ so that Hinv(ω) does not have any singularities.

The final form of the whitening procedure is as follows:

$w (t) = \mathcal{F}^{-1} \left\{ H_{inv}(\omega) \right\} * (x(t) - \mu)$

so that w(t) is a white noise random process with zero mean and constant, unit power spectral density

$S_{w}(\omega) = \mathcal{F} \left\{ \mathbb{E} \{ w(t_1) w(t_2) \} \right\} = H_{inv}(\omega) S_x(\omega) H^{*}_{inv}(\omega) = \frac{S_x(\omega)}{S_x(\omega)} = 1$

Note that this power spectral density corresponds to a delta function for the covariance function of w(t).

$K_w(\tau) = \,\!\delta (\tau)$
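A discrete-time sketch of this whitening procedure can be written with NumPy FFTs, standing in for the continuous-time frequency-domain filtering above. The one-pole filter H below is an illustrative assumption; since it is minimum phase and |H| is bounded away from zero, the inverse filter 1/H is well defined, and with circular filtering the round trip recovers the white sequence exactly:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096

# Frequency response of a simple minimum-phase one-pole filter,
# H(omega) = 1 / (1 - a e^{-j omega}), sampled on the DFT grid.
# With a = 0.8, |1 - a e^{-j omega}| >= 0.2, so 1/H never blows up.
a = 0.8
omega = 2 * np.pi * np.arange(n) / n
H = 1.0 / (1.0 - a * np.exp(-1j * omega))

# Color a white sequence by circular filtering in the frequency domain:
# the result has power spectral density |H|^2 instead of a flat spectrum.
w = rng.standard_normal(n)
x = np.fft.ifft(np.fft.fft(w) * H).real

# Whiten x with the inverse filter H_inv = 1/H. Because the filtering is
# circular, this recovers the original white sequence (up to float error).
w_rec = np.fft.ifft(np.fft.fft(x) / H).real
print(np.max(np.abs(w_rec - w)))
```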

### Simulating a continuous-time random signal

We can simulate any wide-sense stationary, continuous-time random process x(t) defined with the same mean μ, covariance function Kx(τ), and power spectral density Sx(ω) as above. Again, we use frequency domain techniques.

We factor the power spectral density of the desired signal x(t)

$S_x(\omega) = \,\!|H(\omega)|^2$

and choose the minimum phase H(ω).

We can simulate x(t) by constructing the following linear, time-invariant filter

$\hat{x}(t) = \mathcal{F}^{-1} \left\{ H(\omega) \right\} * w(t) + \mu$

where w(t) is a continuous-time, white-noise signal with the following 1st and 2nd moment properties:

$\mathbb{E}\{w(t)\} = 0$
$\mathbb{E}\{w(t_1)w^{*}(t_2)\} = K_w(t_1, t_2) = \delta(t_1 - t_2)$

Thus, the resultant signal $\hat{x}(t)$ will have the same 2nd moment properties as the desired signal x(t).
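The simulation procedure can likewise be sketched in discrete time with NumPy (the one-pole H, mean, and sample size below are illustrative assumptions). Filtering white noise through H produces a signal whose variance matches the theoretical value implied by the chosen power spectral density:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Desired power spectral density S_x = |H|^2 with the minimum-phase choice
# H(omega) = 1 / (1 - a e^{-j omega}) (a one-pole lowpass, for illustration).
a = 0.8
omega = 2 * np.pi * np.arange(n) / n
H = 1.0 / (1.0 - a * np.exp(-1j * omega))

# x_hat = F^{-1}{H} * w + mu, implemented as circular frequency-domain filtering
# of a white sequence w (zero mean, unit variance).
mu = 5.0
w = rng.standard_normal(n)
x_hat = np.fft.ifft(np.fft.fft(w) * H).real + mu

# For this H, integrating S_x over frequency gives variance 1 / (1 - a**2).
var_theory = 1.0 / (1.0 - a ** 2)
print(x_hat.mean(), x_hat.var(), var_theory)
```

The sample mean and variance of the simulated signal approach μ and the theoretical variance as n grows, illustrating that the filter reproduces the desired 1st and 2nd moment properties.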