# Moving Average (MA) Process

## Formal Definition of the MA(1) Process

\begin{equation}

X_t = \epsilon_t + \theta \cdot \epsilon_{t-1}

\end{equation}

where $X_t$ is the state of the system at time $t$,

$\epsilon_t$ is the shock/noise at time $t$,

$\theta$ is a constant coefficient,

$\epsilon_{t-1}$ is the shock/noise at time $t-1$.

We assume that the $\epsilon_t$ are i.i.d. $\mathcal{N}(0, \sigma^2)$.
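As a quick sanity check, the process is easy to simulate. In this sketch, $\theta = 0.5$, $\sigma = 1$, and the sample size are illustrative choices, not values from the text:

```python
import numpy as np

# Simulate an MA(1) process X_t = eps_t + theta * eps_{t-1}.
# theta, sigma, and n are illustrative choices, not from the text.
rng = np.random.default_rng(0)
theta, sigma, n = 0.5, 1.0, 10_000

eps = rng.normal(0.0, sigma, size=n + 1)   # i.i.d. N(0, sigma^2) noise
x = eps[1:] + theta * eps[:-1]             # X_t = eps_t + theta * eps_{t-1}

print(x.mean())   # close to 0
print(x.var())    # close to sigma^2 * (1 + theta^2) = 1.25
```

The sample mean and variance match the theoretical values derived in the properties section below.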

## Example 1:

Let $X_t$ represent the “change in demand for lemonade at time $t$”, $\Delta\mathrm{lemon}_t$, and let $\epsilon_t$ represent the change in temperature. Suppose the MA(1) process for $\Delta\mathrm{lemon}_t$ is given by the following formula:

\begin{equation}

\Delta\mathrm{lemon}_t = \epsilon_t - 0.5 \cdot \epsilon_{t-1}

\end{equation}

If the temperature increases at time $t$, then $\epsilon_t > 0$, so $\Delta\mathrm{lemon}_t > 0$: demand for lemonade increases at time $t$. At time $t+1$, if $\epsilon_{t+1} = 0$, the first term is zero but the second term, $-0.5 \cdot \epsilon_t$, is negative.

So demand for lemonade decreases relative to the previous day. This may be because customers still have lemonade left over from yesterday, a ‘residue’ effect.
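The two-period arithmetic above can be checked directly. The one-unit temperature shock is an assumed value for illustration:

```python
theta = -0.5               # MA(1) coefficient from the lemonade example
shock = 1.0                # assumed one-unit temperature increase at time t
eps = {0: shock, 1: 0.0}   # eps_t = 1, eps_{t+1} = 0 (no further shocks)

d_lemon_t  = eps[0] + theta * 0.0      # change at time t:   +1.0
d_lemon_t1 = eps[1] + theta * eps[0]   # change at time t+1: -0.5

print(d_lemon_t, d_lemon_t1)   # 1.0 -0.5
```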

## Example 2:

Let $X_t$ represent the “change in oil price at time $t$”, $\Delta\mathrm{OilPrice}_t$, and let $\epsilon_t$ represent a hurricane at sea. Suppose the MA(1) process for $\Delta\mathrm{OilPrice}_t$ is given by the following formula:

\begin{equation}

\Delta\mathrm{OilPrice}_t = \epsilon_t + 0.5 \cdot \epsilon_{t-1}

\end{equation}

If there is a hurricane at time $t$, then $\epsilon_t > 0$, so $\Delta\mathrm{OilPrice}_t > 0$: oil prices increase at time $t$ (perhaps due to a supply disruption).

At time $t+1$, if $\epsilon_{t+1} = 0$ (no hurricane), the first term is zero but half of the previous shock carries over:

\begin{equation}

\Delta\mathrm{OilPrice}_{t+1} = 0 + 0.5 \cdot \epsilon_{t}

\end{equation}

So the oil price still increases at time $t+1$, perhaps because supply has not yet fully recovered from the hurricane.

At time $t+2$, if $\epsilon_{t+2} = 0$ (no hurricane), then $\Delta\mathrm{OilPrice}_{t+2} = 0$.

So the shock at time $t$ affects only the two periods $t$ and $t+1$.
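A minimal sketch of this propagation, assuming a single unit-sized hurricane shock at $t = 1$:

```python
theta = 0.5
eps = [0.0, 1.0, 0.0, 0.0]   # a single hurricane shock at t = 1

# Delta OilPrice_t = eps_t + 0.5 * eps_{t-1}
d_price = [eps[t] + theta * eps[t - 1] for t in range(1, len(eps))]
print(d_price)   # [1.0, 0.5, 0.0] -- the shock affects t and t+1 only
```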

## Example 3:

Let $X_t$ represent the “number of shoppers at time $t$”, $\mathrm{Shoppers}_t$, and let $\epsilon_t$ represent new shoppers arriving at the shop at time $t$. Suppose the MA(1) process for $\mathrm{Shoppers}_t$ is given by the following formula:

\begin{equation}

\mathrm{Shoppers}_t = \epsilon_t + 0.5 \cdot \epsilon_{t-1}

\end{equation}

If shoppers arrive at the shop at time $t$, then $\epsilon_t > 0$, so $\mathrm{Shoppers}_t > 0$: the total number of shoppers increases at time $t$.

**Properties of the MA(1) Process: Stationarity and Weak Dependence**

1. The expected value must be time independent.

\begin{equation}

\mathbb{E}(X_t) = \mathbb{E}(\epsilon_t) + \theta \cdot \mathbb{E}(\epsilon_{t-1})

\mathbb{E}(\epsilon_t) = 0 \rightarrow \mathbb{E}(X_t) = 0

\end{equation}

2. The variance must be time independent.

\begin{equation}

\mathrm{Var}(X_t) = \mathrm{Var}(\epsilon_t) + \theta^2 \cdot \mathrm{Var}(\epsilon_{t-1})

\mathrm{Var}(X_t) = \sigma^2 + \theta^2 \cdot \sigma^2

\end{equation}

3. The covariance must also not be a function of time $t$.

\begin{equation}

\mathrm{Cov}(X_t, X_{t-1}) = \mathrm{Cov}(\epsilon_t + \theta \cdot \epsilon_{t-1}, \epsilon_{t-1} + \theta \cdot \epsilon_{t-2})

\mathrm{Cov}(X_t, X_{t-1}) = \theta \cdot \mathrm{Cov}(\epsilon_{t-1}, \epsilon_{t-1}) = \theta \cdot \sigma^2

\end{equation}

For $h > 1$, the two sides share no common $\epsilon$ terms, so

\begin{equation}

\mathrm{Cov}(X_t, X_{t-h}) = \mathrm{Cov}(\epsilon_t + \theta \cdot \epsilon_{t-1}, \epsilon_{t-h} + \theta \cdot \epsilon_{t-h-1}) = 0

\end{equation}

4. To prove weak dependence, we have to show that

\begin{equation}

\lim_{h\to\infty} \mathrm{Corr}(X_t, X_{t-h}) = 0

\end{equation}

From (3), we can see that for $h > 1$ the covariance (and hence the correlation) is zero. Hence this MA(1) process is stationary and weakly dependent.
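These properties can be verified empirically. In this sketch, $\theta = 0.5$ and the sample size are illustrative; the lag-1 correlation should approach $\theta/(1+\theta^2) = 0.4$, and higher lags should approach 0:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n = 0.5, 1.0, 200_000

eps = rng.normal(0.0, sigma, n + 1)
x = eps[1:] + theta * eps[:-1]   # MA(1): X_t = eps_t + theta * eps_{t-1}

# Sample lag-1 and lag-2 autocorrelations
r1 = np.corrcoef(x[1:], x[:-1])[0, 1]
r2 = np.corrcoef(x[2:], x[:-2])[0, 1]

print(round(r1, 3))  # close to theta / (1 + theta^2) = 0.4
print(round(r2, 3))  # close to 0: no dependence beyond lag 1
```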

# Autoregressive (AR) Model

An AR model is a model in which the value of a variable in one period is related to its values in previous periods.

The previous values included are called lags and are important for time series analysis. For example, the money in your account in one month is related to the money in your account in the previous month.

The AR(p) model is defined as follows:

\begin{equation}

y_t = c + \sum_{i=1}^{p} \gamma_i \cdot y_{t-i} + \epsilon_t

\end{equation}

where $p$ = number of lags, i.e., the number of previous values of the variable included,

$c$ = constant,

$\gamma_i$ = coefficient of the $i$-th lagged value,

$\epsilon_t$ = error term at time $t$, included to account for white noise.
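A short simulation sketch of the simplest case, AR(1) with $p = 1$; the values of $c$, $\gamma$, and $\sigma$ here are illustrative assumptions:

```python
import numpy as np

# Simulate an AR(1) model y_t = c + gamma * y_{t-1} + eps_t.
# c, gamma, and sigma are illustrative values, not from the text.
rng = np.random.default_rng(2)
c, gamma, sigma, n = 1.0, 0.7, 1.0, 50_000

y = np.zeros(n)
for t in range(1, n):
    y[t] = c + gamma * y[t - 1] + rng.normal(0.0, sigma)

# For |gamma| < 1 the process is stationary with mean c / (1 - gamma)
print(y[1000:].mean())   # close to 1 / (1 - 0.7) = 3.33
```

The first 1000 observations are dropped as burn-in so the sample mean reflects the stationary distribution rather than the start at $y_0 = 0$.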

Stationarity: A time series is stationary if its statistical properties are constant over time. A stationary series has no trend, its variations around its mean have a constant amplitude, and it wiggles in a consistent fashion, i.e., its short-term random time patterns always look the same in a statistical sense.

Let’s define two functions for the time series:

Autocorrelation function (ACF): the correlation between $y_t$ and $y_{t-k}$, i.e., the ratio of their covariance to the square root of the product of their variances:

\begin{equation}

\rho_k = \mathrm{Corr}(y_t, y_{t-k}) = \frac{\mathrm{Cov}(y_t, y_{t-k})}{\sqrt{\mathrm{Var}(y_t)\cdot \mathrm{Var}(y_{t-k})}}

\end{equation}
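A minimal sketch of the sample version of this estimator (the helper name `acf` is ours, not from any library):

```python
import numpy as np

def acf(y, k):
    """Sample autocorrelation of series y at lag k (estimate of rho_k)."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    # Standard estimator: lagged cross-products over the full sum of squares
    return (y[k:] * y[:-k]).sum() / (y * y).sum()

rng = np.random.default_rng(3)
white = rng.normal(size=5000)
print(round(acf(white, 1), 3))   # close to 0 for white noise
```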

# Random Walk

Let’s suppose a university student, Aayush, is terribly drunk and is unable to walk properly. Assume he can take a step either left or right only, and he chooses it randomly. The question is, on average, where will he be after time t with respect to the place where he started? Will he be on average around the origin, or will he be further away as time increases?

Random Walk is a process that follows:

\begin{equation}

X_{t} = X_{t-1} + \epsilon_t

\end{equation}

where $\epsilon_t \sim \mathrm{iid}(0, \sigma^2)$,

which means that the current state is the previous state with some added noise.

## Examples

Another simple example is a robot in 1-D that can move only left or right in integer steps. It decides its action randomly by tossing a coin: if it lands heads, the robot moves +1; if tails, it moves -1. The coin toss is represented by $\epsilon$ (taking values +1 and -1 ensures that the expected value is 0).

$X_t$ is the position of the robot at any time t. $\epsilon_t$ is the result of the coin toss at that time.

Initially the robot is at the origin, so $X_0 = 0$.

With this setup, we can answer questions like: what is the probability that the robot is at position $X_5 = +3$ at time $t = 5$?
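For the fair-coin walk, such probabilities follow from a binomial count of the +1 steps. A small sketch (the helper `prob_position` is our own illustrative function):

```python
from math import comb

def prob_position(t, x):
    """P(X_t = x) for a +/-1 random walk started at 0 with a fair coin."""
    up = (t + x) // 2                      # number of +1 steps needed
    if (t + x) % 2 or not 0 <= up <= t:    # parity and range check
        return 0.0
    return comb(t, up) / 2**t

# X_5 = +3 requires 4 heads and 1 tail out of 5 tosses
print(prob_position(5, 3))   # 5/32 = 0.15625
```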

People have used this model on market prices:

*“…The random walk theory states that market and securities prices are random and not influenced by past events….”*

*“…The random walk theory proclaims that it is impossible to consistently outperform the market, particularly in the short-term, because it is impossible to predict stock prices…”*

## Formal Definition

More generally,

A random walk is an AR(1) process with $\rho = 1$. The AR(1) process:

\begin{equation}

X_{t} = \rho \cdot X_{t-1} + \epsilon_{t}

\end{equation}

where $\epsilon_{t} \sim \mathrm{iid}(0, \sigma^2)$.

With $\rho = 1$,

\begin{equation}

X_{t} = X_{t-1} + \epsilon_{t}

\end{equation}

## Non Stationarity

Setting $\rho = 1$ makes the process non-stationary. Let us see why. There are three conditions for stationarity:

- Mean should be constant: $\mathbb{E}(X_t) = \text{constant}$
- Variance should be constant: $\mathbb{V}(X_t) = \text{constant}$
- Covariance should not depend on $t$: $\mathbb{C}(X_t, X_{t+h})$ a function of $h$ only

**Expectation**

Expanding the above equation:

\begin{equation}

X_{t} = X_{t-2} + \epsilon_{t-1} + \epsilon_{t}

\end{equation}

\begin{equation}

X_{t} = X_{0} + \sum_{i=1}^{t} \epsilon_{i}

\end{equation}

\begin{equation}

\mathbb{E}(X_t) = \mathbb{E}(X_0) + \mathbb{E}\left(\sum_{i=1}^{t} \epsilon_{i}\right)

\end{equation}

We know, by linearity of expectation,

\begin{equation}

\mathbb{E}\left(\sum_{i=1}^{t} \epsilon_{i}\right) = \sum_{i=1}^{t} \mathbb{E}(\epsilon_{i})

\end{equation}

And, because of our assumption $\epsilon_t \sim \mathrm{iid}(0, \sigma^2)$,

$\mathbb{E}(\epsilon_i) = 0$

Therefore, $\mathbb{E}(X_t) = \mathbb{E}(X_0) = $ constant (w.r.t. time).

*The expectation does not violate the stationarity condition.*

**Variance**

Similarly, we can compute the variance of $X_t$ (the $\epsilon_i$ are independent, so their variances add):

\begin{equation}

\mathbb{V}(X_t) = \mathbb{V}(X_0) + \sum_{i=1}^{t} \mathbb{V}(\epsilon_{i})

\end{equation}

$\sum_{i=1}^{t} \mathbb{V}(\epsilon_{i}) = t \cdot \sigma^2$

Hence the stationarity condition is violated: the variance grows with time.
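A quick simulation sketch illustrating this: across many independent walks, the cross-sectional mean stays near 0 while the variance grows roughly like $t \cdot \sigma^2$ (the number of walks and the horizon are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n_walks, t_max, sigma = 20_000, 100, 1.0

# Simulate many independent random walks X_t = X_{t-1} + eps_t with X_0 = 0
steps = rng.normal(0.0, sigma, size=(n_walks, t_max))
paths = steps.cumsum(axis=1)   # cumulative sums of the noise are the walks

# At the final time: mean stays near 0, variance grows like t * sigma^2
print(round(paths[:, -1].mean(), 2))   # close to 0
print(round(paths[:, -1].var(), 1))    # close to t_max * sigma^2 = 100
```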