Brownian Motion Process: Definition and Properties [11/05]

Prepared by Le Fang, Simranjot Singh and Devesh Mulmule

1. Introduction

Brownian motion is the main process for the calculus of continuous processes. It was described by the botanist R. Brown in 1828 as the motion of a pollen particle suspended in fluid, observed to move in an irregular, random fashion. In 1905, A. Einstein argued that this movement is due to bombardment of the particle by the molecules of the fluid, and he obtained the equations for Brownian motion. Earlier, in 1900, L. Bachelier used Brownian motion as a model for the movement of stock prices in his mathematical theory of speculation. The mathematical foundation for Brownian motion as a stochastic process was laid by N. Wiener in 1931, and this process is also called the Wiener process.

The Brownian Motion process B(t) serves as a basic model for the cumulative effect of pure noise. If B(t) denotes the position of a particle at time t, then the displacement B(t) − B(0) is the effect of the purely random bombardment by the molecules of the fluid, or the effect of noise over time t.

2. Defining Properties of Brownian Motion

Brownian motion {B(t)} is a stochastic process with the following properties.

  1. (Independence of increments) $ {B(t) - B(s)}$, for $ {t > s}$, is independent of the past, that is, independent of $ {B(u), 0 \leq u \leq s}$, or of $ {F(s)}$, the $ {\sigma}$-algebra generated by $ {B(u), u \leq s}$.
  2. (Normal increments) $ {B(t) - B(s)}$ has the Normal distribution with mean $ {0}$ and variance $ {t - s}$. Taking $ {s = 0}$, this implies that $ {B(t) - B(0)}$ has the $ {N(0, t)}$ distribution.
  3. (Continuity of paths) $ {B(t)}$, $ {t \geq 0}$, are continuous functions of $ {t}$.

Sometimes another version of Brownian motion, $ {W(t)=B(t+s)-B(s)}$, is used; it is called the Wiener process or standard Brownian motion and starts at $ {W(0)=0}$.
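The three defining properties translate directly into a simulation recipe on a time grid: independent $N(0,\Delta t)$ increments, cumulatively summed. The following minimal sketch (assuming NumPy is available; all function and variable names are illustrative, not from these notes) generates discretized paths like those shown later in Figure 1.

```python
import numpy as np

def simulate_bm_paths(n_paths=4, n_steps=1000, T=1.0, x0=0.0, seed=0):
    """Simulate Brownian motion on a grid of n_steps intervals over [0, T].

    Each path starts at x0 and is built from independent N(0, dt) increments,
    which is the independent/Normal-increments definition restricted to grid points.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = x0 + np.cumsum(increments, axis=1)
    # prepend the starting value B(0) = x0
    return np.hstack([np.full((n_paths, 1), x0), paths])

B = simulate_bm_paths()
print(B.shape)   # (4, 1001): four discretized paths on [0, 1]
```

Continuity of paths cannot be reproduced exactly on a grid, but refining the grid gives increasingly good approximations.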

3. State Transition Kernel of Brownian Motion
3.1. Transition Probability Functions

If the process is started at $ {x}$, $ {B(0) = x}$, then $ {B(t)}$ has the $ {N (x, t)}$ distribution. More generally, the conditional distribution of $ { B(t + s)}$ given that $ {B(s) = x}$ is $ {N (x, t)}$. The transition function $ {P (y, t, x, s)}$ is the cumulative distribution function of this distribution,

$ \displaystyle P (y, t, x, s) = P(B(t + s) ≤ y|B(s) = x) = P_x (B(t) ≤ y)\ \ \ \ \ (1)$

The density function of this distribution is the transition probability density function of Brownian motion,

$ \displaystyle p_t (x,y) = (1/\sqrt{2\pi t}) e^{-{(y-x)^2/2t}}\ \ \ \ \ (2)$

The finite-dimensional distributions can be computed with the help of the transition probability density function, by using independence of increments.

$ \displaystyle \begin{split}P_x (B(t_1 ) ≤ x_1 , B(t_2 ) ≤ x_2 , . . . , B(t_n ) ≤ x_n ) = \\ \int_{-\infty}^{x_1}p_{t_1}(x,y_1)d{y_1} \int_{-\infty}^{x_2}p_{{t_2}-{t_1}}(y_1,y_2)d{y_2}…\int_{-\infty}^{x_n}p_{{t_n}-{t_{n-1}}}(y_{n-1},y_n)d{y_n}\end{split}\ \ \ \ \ (3)$
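As a small illustration (SciPy is assumed; the function names below are illustrative), the transition density (2) is simply the $N(x, t)$ density in $y$, and the transition function (1) is the corresponding cumulative distribution function.

```python
import numpy as np
from scipy.stats import norm

def transition_density(x, y, t):
    """p_t(x, y): density of B(t+s) at y given B(s) = x, as in equation (2)."""
    return np.exp(-(y - x) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def transition_cdf(x, y, t):
    """P(y, t, x, s) = P(B(t+s) <= y | B(s) = x), as in equation (1)."""
    return norm.cdf(y, loc=x, scale=np.sqrt(t))

# the hand-written density agrees with the N(x, t) density
print(np.isclose(transition_density(0.5, 1.0, 2.0),
                 norm.pdf(1.0, loc=0.5, scale=np.sqrt(2.0))))   # True
```

The finite-dimensional probability (3) could then be approximated by numerically integrating products of such densities over the successive increments.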

3.2. Space and time Homogeneity

It is easy to see that the one-dimensional distributions of Brownian motion satisfy $ {P_0 (B(t) ∈ A) = P_x (B(t) ∈ x + A)}$, where A is an interval on the line. All finite-dimensional distributions of $ {B_x (t)}$ and $ {x + B_0 (t)}$ are the same. Thus $ {B_x (t) − x}$ is Brownian motion started at $ {0}$, and $ {B_0 (t) + x}$ is Brownian motion started at x, in other words

$ \displaystyle B_x (t) = x + B_0(t)\ \ \ \ \ (4)$

This property is called the space homogeneous property of Brownian Motion. Generally, a stochastic process is called space-homogeneous if its finite-dimensional distributions do not change with a shift in space, namely if
$ {P(X(t_1 ) ≤ x_1 , X(t_2 ) ≤ x_2 , . . . X(t_n ) ≤ x_n |X(0) = 0) \\= P(X(t_1 ) ≤ x_1 + x, X(t_2 ) ≤ x_2 + x, . . . X(t_n ) ≤ x_n + x|X(0) = x)}$.

On the other hand, Brownian motion also has the time homogeneous property: for $ {t > s}$,

$ {P(B(t) ≤ y |B(s) = x) = P(B(t-s) ≤ y|B(0) = x)}$

Actually, all finite-dimensional distributions of Brownian motion are time homogeneous.

Four realizations of Brownian motion $ {B = B(t)}$ started at 0 are exhibited in Figure 1. Although it is a process governed by pure chance with zero mean, it has regions where the motion looks like it has “trends”.


Figure 1: Four realizations or paths of Brownian motion B(t).
4. Brownian Motion as a Gaussian Process

The covariance function of the process X(t) is defined by

$ \displaystyle \begin{split}\gamma(s, t) = Cov (X(t), X(s)) \\= E (X(t) − EX(t)) (X(s) − EX(s)) \\= E (X(t)X(s)) − EX(t)EX(s)\end{split}\ \ \ \ \ (5)$

Theorem: A Brownian motion is a Gaussian process with zero mean function, and covariance function $ {min(t, s)}$. Conversely, a Gaussian process with zero mean function, and covariance function $ {min(t, s)}$ is a Brownian motion.

Proof: Since the mean of Brownian motion is zero,
$ {\gamma(s, t) = Cov (B(t), B(s)) = E (B(t)B(s))}$.
If ${t < s}$, write ${B(s)= B(t) + (B(s) - B(t))}$; then

$E(B(t)B(s)) = EB^2(t) + E\big(B(t)(B(s) - B(t))\big)= EB^2(t) = t$,

since the cross term vanishes by the independence of increments and the zero mean. Similarly, if $t > s$, $E(B(t)B(s)) = s$. In both cases $E(B(t)B(s)) = \min(t, s)$.

To show the converse, let $t$ be arbitrary and $s \ge 0$. Since $X(t)$ is a Gaussian process, the joint distribution of $X(t), X(t + s)$ is bivariate Normal, and by assumption it has zero mean. Therefore the vector $(X(t), X(t+s)-X(t))$ is also bivariate Normal. The variables $X(t)$ and $X(t + s) - X(t)$ are uncorrelated: using $Cov(X(t), X(t + s)) = \min(t, s)$, we get $Cov(X(t), X(t+s)-X(t)) = Cov(X(t), X(t+s))-Cov(X(t), X(t)) = t-t = 0$.

A property of the multivariate Normal distribution implies that uncorrelated jointly Normal variables are independent. Thus the increment $X(t + s) - X(t)$ is independent of $X(t)$; its variance is $Var(X(t+s)) + Var(X(t)) - 2Cov(X(t+s), X(t)) = (t+s) + t - 2t = s$, so it has the $N(0, s)$ distribution. Therefore $X$ is a Brownian motion.
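The covariance function $\min(t,s)$ is easy to probe by simulation. A minimal sketch (NumPy assumed; names are illustrative): estimate $Cov(B(s), B(t))$ across many simulated paths and compare it with $\min(t, s)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 100_000, 100, 1.0
dt = T / n_steps
# paths[:, k] approximates B((k + 1) * dt), with B(0) = 0
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

s_idx, t_idx = 29, 79                    # s = 0.3, t = 0.8
s, t = (s_idx + 1) * dt, (t_idx + 1) * dt
emp_cov = np.cov(paths[:, s_idx], paths[:, t_idx])[0, 1]
print(emp_cov, min(s, t))                # both close to 0.3
```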

Example 4.1 Find the distribution of $\int_{0}^{1} B(t) dt$

To determine the distribution of:

$ \displaystyle \int_{0}^{1} B(t) dt$

We can get this by taking limit of distribution of approximating sums

$ \displaystyle \sum_i B(t_{i})\Delta_{i}$, where $ {\Delta_{i} = t_{i+1} - t_{i}}$

If, for example, $ {t_{i} = i/n}$, then for $ {n=4}$ the approximating sum is $\displaystyle {\tfrac14 (B(1/4) + B(1/2) + B(3/4) + B(1))}$. This sum is a Normal random variable with mean 0 and variance 15/32. Similarly, every approximating sum is Normal with zero mean, and it can be shown that a limit of Gaussian distributions is Gaussian. Thus the required distribution is Normal with mean zero. Its variance is calculated as follows:

$ \displaystyle Var\Big(\int_{0}^{1}B(t)dt\Big) = Cov\Big(\int_{0}^{1} B(t)dt, \int_{0}^{1}B(s)ds\Big)$

$ \displaystyle =E\Big(\int_{0}^{1} B(t)dt \int_{0}^{1}B(s)ds\Big)$

$ \displaystyle =\int_{0}^{1} \int_{0}^{1}E\big(B(t) B(s)\big)\,dt\, ds$

$ \displaystyle =\int_{0}^{1} \int_{0}^{1}\min(t,s)\,dt\, ds$

$ \displaystyle =2\int_{0}^{1} \int_{0}^{s}t\,dt\, ds=\int_{0}^{1}s^{2}\,ds=1/3$

Hence $\int_{0}^{1}B(t)dt$ has the $N(0,1/3)$ distribution.
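This result lends itself to a quick Monte Carlo check (a sketch with illustrative names): approximate $\int_0^1 B(t)\,dt$ by a Riemann sum along each simulated path; the sample mean and variance should be close to $0$ and $1/3$.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 500
dt = 1.0 / n_steps
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# Riemann-sum approximation of the integral of B over [0, 1], one value per path
integrals = paths.sum(axis=1) * dt
print(integrals.mean(), integrals.var())   # approximately 0 and 1/3
```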

5. Properties of Brownian Motion Paths
5.1 Quadratic Variation of Brownian Motion

The quadratic variation of Brownian motion Q(t) is defined as:

$ \displaystyle Q(t)=Q([0,t])=\lim_{\delta_n \to 0}\sum_{i=1}^{n}|B(t_i^{n})-B(t_{i-1}^{n})|^{2}$

where the limit is taken over shrinking partitions $\{0=t_0^{n}<t_1^{n}<\dots<t_n^{n}=t\}$ of $[0, t]$, with

$ \displaystyle \delta_n = \max_i (t_{i}^{n}-t_{i-1}^{n})\to 0$

as $n$ approaches $ {\infty}$.

Let the sequence of partitions be the dyadic partitions: the interval $[0, t]$ is divided in two, each subinterval is again divided in two, and so on.

Let $ {T_n = \sum_{i=1}^{n}|B(t_i^{n})-B(t_{i-1}^{n})|^{2}}$

$ \displaystyle E(T_n) = E(\sum_{i=1}^{n}|B(t_i^{n})-B(t_{i-1}^{n})|^{2})$

$ \displaystyle =\sum_{i=1}^n E\big(B(t_i^{n})-B(t_{i-1}^{n})\big)^{2}=\sum_{i=1}^n (t_i^{n} - t_{i-1}^{n})$

$ \displaystyle = t-0 = t$

Similarly,

$ \displaystyle Var(T_n) = Var\Big(\sum_{i}|B(t_i^{n})-B(t_{i-1}^{n})|^{2}\Big)$

$ \displaystyle = \sum_{i}Var\Big(\big(B(t_i^{n})-B(t_{i-1}^{n})\big)^{2}\Big)$

$ \displaystyle \leq \sum_{i}E\big(B(t_i^{n})-B(t_{i-1}^{n})\big)^{4} = \sum_{i}3(t_i^{n} - t_{i-1}^{n})^{2}$

(since the fourth moment of N(0,$ {\sigma^2}$) is $ {3\sigma^4}$, and $ {Var(X)\leq E(X^2)}$)

$ \displaystyle \leq 3\max_i(t_i^{n} - t_{i-1}^{n})\sum_{i}(t_i^{n} - t_{i-1}^{n})$

$ \displaystyle =3t\delta_n$

Hence, for the dyadic partitions $ {\delta_n = t/2^n}$, so that $ {\sum_{n=1}^{\infty} Var(T_n) \leq \sum_{n=1}^{\infty} 3t^2/2^{n} < \infty}$. Using the monotone convergence theorem, $ {E\big(\sum_{n=1}^{\infty} (T_n-E(T_n))^{2}\big) < \infty}$. This implies that the series inside the expectation converges almost surely, hence its terms converge to zero, i.e. $ {T_n-E(T_n)}$ tends to 0 almost surely; consequently ${T_n}$ tends to t almost surely.
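The convergence $T_n \to t$ can also be observed numerically. In the sketch below (NumPy assumed, names illustrative) one path is simulated on a fine grid and the sum of squared increments is computed over successively finer dyadic partitions of $[0, t]$; the sums settle near $t$.

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 2 ** 16
dt = t / n
# one fine-grained path B(0), B(dt), ..., B(t)
path = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

for level in range(4, 17, 4):             # dyadic partitions with 2**level intervals
    step = n // 2 ** level
    coarse = path[::step]
    T_n = np.sum(np.diff(coarse) ** 2)     # sum of squared increments over the partition
    print(2 ** level, T_n)                 # approaches t = 1 as the partition refines
```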

5.2 Properties of Brownian motion

As functions of $ {t}$, the sample paths of Brownian motion have the following properties. Almost every sample path B(t), $ {0 \leq t \leq T}$,

1. is a continuous function of t;
2. is not monotone in any interval, no matter how small the interval is;
3. is not differentiable at any point;
4. has infinite variation on any interval, no matter how small it is;
5. has quadratic variation on [0, t] equal to t, for any t.

Properties 1 and 3 of Brownian motion paths state that although any B(t) is a continuous function of t, its increments $ {\Delta B(t)}$ over a time interval $ {\Delta t}$ are of order $ {\sqrt{\Delta t}}$, which is much larger than $ {\Delta t}$ as $ {\Delta t}$ tends to 0; this is what rules out differentiability.

Property 4 follows from property 5: a continuous function with finite variation has zero quadratic variation, whereas Brownian motion has quadratic variation $ {t > 0}$ on $ {[0, t]}$.

Property 2 follows in turn from property 4: a monotone function has finite variation on any interval.
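Properties 4 and 5 can be seen side by side on a simulated path (a rough illustration, names illustrative): as the partition refines, the sum of absolute increments, which approximates the variation, keeps growing, while the sum of squared increments stays near $t$.

```python
import numpy as np

rng = np.random.default_rng(4)
t, n = 1.0, 2 ** 16
path = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(t / n), n))])

for level in (6, 10, 14):
    coarse = path[:: n // 2 ** level]
    incs = np.diff(coarse)
    # first-order variation keeps growing, quadratic variation stays near t = 1
    print(2 ** level, np.abs(incs).sum(), (incs ** 2).sum())
```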

6. Three Martingales of Brownian motion

A stochastic process {X(t), $ {t \geq 0}$} is a martingale if
1) for any t it is integrable, $ {E|X(t)| < \infty}$,
2) for any $ {s > 0}$,

$ \displaystyle E(X(t + s)|F(t)) = X(t)$

where $ {F(t)}$ is the information available up to time t and the equality holds almost surely. In other words, if we know the values of the process up to time t and $ {X(t)=x}$, then the expected value of any future value is x.

Let B(t) be any Brownian motion. Then

1. B(t) is a martingale.
2. $ {B(t)^2-t}$ is a martingale.
3. For any u, $ {e^{uB(t)-(u^2/2)t}}$ is a martingale.

Proof of the above properties:

Note: The key idea in establishing the martingale property of Brownian motion is the independence of increments, which implies that for any (integrable) function $g$,

$\displaystyle E(g(B_{t+s}-B_t)|F_t)=E(g(B_{t+s}-B_t))$. We take the proof of the second martingale above as an example.

$ \displaystyle B(t+s)^2=(B(t)+B(t + s)-B(t))^2$

$ \displaystyle =B(t)^2+2B(t)(B(t+s)- B(t))+(B(t+s)-B(t))^2$

Now, conditioning on $ {F(t)}$ and using that $ {B(t)}$ is known at time $ {t}$ while the increment $ {B(t+s)-B(t)}$ is independent of $ {F(t)}$,

$ \displaystyle E(B(t+s)^2|F(t))=B(t)^2+2B(t)\,E\big(B(t+s)- B(t)\big)+E\big(B(t+s)-B(t)\big)^2$

$ \displaystyle =B(t)^2+0+s=B(t)^2+s$

Subtracting $ {(t+s)}$ from both sides gives $ {E\big(B(t+s)^2-(t+s)\,|\,F(t)\big)=B(t)^2-t}$, so $ {B(t)^2-t}$ is a martingale.
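A quick Monte Carlo sanity check of the second martingale (a sketch; the numbers are illustrative): fix $B(t) = x$, simulate the increment $B(t+s)-B(t) \sim N(0, s)$, and verify that $E\big(B(t+s)^2-(t+s)\,|\,B(t)=x\big) \approx x^2 - t$.

```python
import numpy as np

rng = np.random.default_rng(5)
t, s, x = 2.0, 1.5, 0.7            # current time, look-ahead, current value B(t) = x
n = 1_000_000

increments = rng.normal(0.0, np.sqrt(s), n)    # B(t+s) - B(t), independent of F(t)
future = x + increments                        # B(t+s) given B(t) = x
print(np.mean(future ** 2 - (t + s)))          # close to x**2 - t = -1.51
```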

7. Markov Property of Brownian Motion & Stopping time
7.1 Stopping Time

A random time $T$ is called a stopping time for $B(t)$, $t\ge 0$, if for any $t$ it is possible to decide whether $T$ has occurred or not by observing $B(s)$, $0\le s\le t$. More rigorously, for any $t$ the set $\{T\le t\}$ belongs to $F_t$, the $\sigma$-algebra generated by $B(s)$, $0\le s\le t$.

Examples:

– Any non-random time $T$ is a stopping time. $\{T\le t\}$ is either $\emptyset$ or $\Omega$.
– The first time $B(t)$ takes the value 1, $T=\inf\{t>0: B(t)=1\}$, is a stopping time, since

$ \displaystyle {\{ T> t\}= \{B(u)<1}$ , for all $ u\le t\}\in F_t $, and hence $\{T\le t\}\in F_t$.
– The time at which Brownian motion attains its maximum on $[0,1]$ is not a stopping time, since deciding it requires knowledge of the whole path on $[0,1]$.
– The last time Brownian motion crosses 0 is not a stopping time, for the same reason. A small numerical illustration of the distinction is sketched below.
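On a discretized path the distinction can be made concrete (a sketch with illustrative names): the first time the path reaches 1 can be found by scanning forward and stopping, whereas locating the maximum on $[0, 1]$ requires the whole path.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
dt = 1.0 / n
path = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
t_grid = np.linspace(0.0, 1.0, n + 1)

# first (discretized) time the path reaches 1: decidable by scanning forward
hit = np.flatnonzero(path >= 1.0)
tau_1 = t_grid[hit[0]] if hit.size else np.inf   # a stopping time (up to discretization)

# time of the maximum on [0, 1]: needs the whole path, so it is not a stopping time
argmax_time = t_grid[np.argmax(path)]
print(tau_1, argmax_time)
```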

7.2 Markov property and strong Markov property

Brownian motion has the Markov property:

$\displaystyle P(B(t+s)\le y|F_t)=P(B(t+s)\le y|B(t))$

Proof: using the exponential martingale and independence of increments,

$ \displaystyle E(\exp(u B(t+s))|F_t) = \exp(u B(t))\, E(\exp(u(B(t+s)-B(t)))|F_t) = \exp(u B(t))\, E(\exp(u(B(t+s)-B(t)))),$

which is a function of $B(t)$ only. Since this holds for every $u$, the conditional distribution of $B(t+s)$ given $F_t$ depends only on $B(t)$.

Brownian motion has the strong Markov property: for any finite stopping time $T$, $\displaystyle P(B(T+t)\le y|F_T)=P(B(T+t)\le y|B(T))$.

The strong Markov property is similar to the Markov property, except that the fixed time $t$ in the definition is replaced by a stopping time $T$. The proof of the strong Markov property can be done using the exponential martingale and the Optional Stopping Theorem.

8. Hitting Time and Exit Time

The hitting time $\tau_x$ is the first time $B(t)$ hits level $x$: $\tau_x=\inf\{t>0:B(t)=x\}$.
The exit time $\tau$ is the first time $B(t)$ exits an interval $(a,b)$: $\displaystyle \tau=\min(\tau_a,\tau_b)$.

Claim: Consider a Brownian motion $B_x(t)$ started at $x$ with $a<x<b$, and the exit time $\tau=\min(\tau_a,\tau_b)$. Then $P_x(\tau<\infty)=1$ and $E_x\tau<\infty$.

Corollary: $P_a(\tau_b<\infty)=1$ and $P_b(\tau_a<\infty)=1$; that is, Brownian motion reaches any given level in finite time with probability 1.
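The claim can be checked numerically (a rough sketch; the step size, horizon and names are illustrative): simulate discretized paths started at $x$ inside $(a, b)$ and record the first time each leaves the interval; in practice every path exits, and the sample mean of the exit times is finite.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, x = -1.0, 2.0, 0.5
dt, n_paths, max_steps = 1e-3, 5_000, 100_000

pos = np.full(n_paths, x)
exit_time = np.full(n_paths, np.inf)
alive = np.ones(n_paths, dtype=bool)      # paths still inside (a, b)

for step in range(1, max_steps + 1):
    pos[alive] += rng.normal(0.0, np.sqrt(dt), alive.sum())
    exited = alive & ((pos <= a) | (pos >= b))
    exit_time[exited] = step * dt
    alive &= ~exited
    if not alive.any():
        break

print(np.isinf(exit_time).mean())                  # fraction not yet exited: ~0
print(exit_time[np.isfinite(exit_time)].mean())    # finite sample mean of the exit time
```

Because the path is only observed on a grid, the estimated exit times are slightly biased upward, but the qualitative conclusion is unaffected.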

8.1 Distribution of Hitting Time

Let the maximum and minimum of Brownian motion $B_0(s)$, started from 0, on $[0,t]$ be $M(t)=\max_{0\le s\le t} B(s)$ and $m(t)=\min_{0\le s\le t} B(s)$.

Let $\tau_x=\inf\{t>0: B(t)=x\}$. Then for any $x>0$, we have $\displaystyle P_0(M(t)\ge x)=2\cdot P_0(B(t)\ge x)=2(1-\Phi({x\over\sqrt t}))$.

Proof: Note the equivalence of the two events $\{M(t)\ge x\}=\{\tau_x\le t\}$. By the strong Markov property, given $\tau_x\le t$ the increment $B(t)-B(\tau_x)$ is symmetric about 0, so $P(B(t)\ge x)=P(\tau_x\le t,\ B(t)-B(\tau_x)\ge 0)=\tfrac12 P(\tau_x\le t)=\tfrac12 P(M(t)\ge x)$, which gives the stated formula.

Corollary: $P(B(t)\le 0, \forall 0\le t\le 1)=0$

The probability density of $\tau_x$ is the inverse gamma density $\displaystyle f_{\tau_x}(t)={|x|\over\sqrt{2\pi}}\,t^{-{3\over2}}\exp\Big(-{x^2\over 2t}\Big)$, and $E \tau_x=\infty$. Hence although the level $x$ is visited with probability 1, the expected time for $x$ to be visited is infinite.

Proof: Note the equivalence of the two events $\{M(t)\ge x\}=\{\tau_x\le t\}$. Hence $P(\tau_x\le t)=P(M(t)\ge x)=2P(B(t)\ge x)=2(1-\Phi(x/\sqrt t))$. Differentiating with respect to $t$ gives the density.
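Both formulas are easy to probe by simulation (a sketch; SciPy's $\Phi$ is used for comparison and all names are illustrative): estimate $P(M(t)\ge x)$ from discretized paths and compare it with $2(1-\Phi(x/\sqrt t))$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
t, x = 1.0, 0.8
n_paths, n_steps = 10_000, 1_000
dt = t / n_steps

paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
running_max = paths.max(axis=1)                 # M(t) along each discretized path

estimate = (running_max >= x).mean()
exact = 2 * (1 - norm.cdf(x / np.sqrt(t)))
print(estimate, exact)   # close; the discretized maximum slightly undershoots M(t)
```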

9. Reflection Principle

Let $T$ be a stopping time. Define $\hat B(t)=B(t)$ for $t\le T$ and $\hat B(t)=2B(T)-B(t)$ for $t\ge T$. Then $\hat B$ is also a Brownian motion. Below, Figure 2 shows this principle.


Figure 2: Reflection Principle of Brownian Motion
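The reflection construction can be written out directly on a discretized path (a sketch, names illustrative): reflect the path about the level $B(T)$ after the stopping time $T$, here taken as the first hitting time of a level.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 10_000
dt = 1.0 / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# stopping time T: first (discretized) time B reaches the level 0.5
hits = np.flatnonzero(B >= 0.5)
T_idx = hits[0] if hits.size else n            # fall back to the end of the path

B_hat = B.copy()
B_hat[T_idx:] = 2 * B[T_idx] - B[T_idx:]       # reflect about B(T) after time T
# B_hat is again (a discretization of) a Brownian motion path
print(B[T_idx], B_hat[T_idx])                  # the two paths agree at time T
```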
