# Statistical Physics [11/17/2015]

### Thermodynamics of complex systems

The topic of this lecture is an introduction to Statistical Physics. In Statistical Physics, one thing that people do is explain macroscopic phenomena from the microscopic interactions of the agents, the atoms, and we hope to accomplish something similar in our study of social systems. What people normally do in physics is use the least number of laws to explain the greatest number of phenomena that we encounter.

The beauty of this is that if we have only a small number of laws, then our possibility of making errors is minimized, and if we know that those laws have been tested, and attempts to refute them have failed, over many generations of researchers, then there is something about those laws that we can rely on. In thermodynamics, people previously did not know what was happening; it turned out to be something statistical.

Prior to the 20th century, people normally thought that, given sufficient tools to observe everything, we would be able to explain everything through Newton's three laws. From the beginning of the 20th century, however, people began to think in terms of information and statistics. The prevailing idea, including today, is that we are not able to observe everything about a system, so we have to talk about probability: what is the probability of observing certain things?

In terms of thermodynamics, what people observed includes the following. Previously people did not know what heat is, so some said that heat is perhaps some kind of substance, like atoms, that moves from one place to another. This turned out not to be true, because we can construct many counterexamples. For example, when we have ice melting into water, it seems like everything is the same, so where did those heat atoms come from? This is one example.

Another example: people found that they could generate an unlimited amount of heat, which means there is no conservation law for a quantity of heat, because heat can be generated without limit. Then there was a person named Joule. Joule constructed a way to turn mechanical energy into heat, and he also constructed a way to turn electrical energy into heat. This means that these forms of energy can be interchanged. Another finding that was pretty important is the Carnot cycle with its four stages.

We can use those four stages either to translate heat into mechanical energy or to translate mechanical energy back into heat. Based on all of this, what people found ultimately looks like the following. We have the internal energy of the system, and this energy is a function of volume, of course, because if we compress or expand the system, we change the configuration of the system and its physical composition. There is another observation in thermodynamics, which is that heat always flows from high temperature to low temperature.

If we put two systems together, heat goes from the subsystem with higher temperature to the subsystem with lower temperature, so there is a kind of flow in terms of heat. Based on all of this, people ultimately write things in the following fashion, using a first-order approximation: the change of internal energy is the temperature times the change of entropy minus the pressure times the change of volume. The mindset here is that, first, it is a first-order approximation, and second, we have two different types of variables.

One type of variable is called intensive and the other extensive. When we combine subsystems into one, the value of an extensive variable of the combined system equals the sum of the values over the subsystems, and differences in intensive variables between subsystems drive the change of the corresponding extensive variables. Extensive variables include things such as weight: if we put two systems together, then the weight of the combined system is the sum of the weights of the two subsystems. Pressure, on the other hand, is an intensive variable, because if we put two subsystems of different pressures together, the difference of pressure will drive the movement of molecules from the subsystem of higher pressure to the subsystem of lower pressure. For example:

$d U(S, V) = \left({\partial U\over\partial S}\right)_V d S + \left({\partial U\over\partial V}\right)_S d V$,

where temperature $T=\left({\partial U\over\partial S}\right)_V$ and pressure $p=-\left({\partial U\over\partial V}\right)_S$ are intensive variables, and entropy and volume are extensive variables.

There are several things that are very frequently mentioned or very frequently used.

First, the $\textbf{free energy}$ $F = U - T S$ is the Legendre pair of the internal energy $U$, and $T$ and $S$ are the Legendre conjugate variables.

We know that it is pretty hard to turn the energy associated with entropy into mechanical energy that we can use. The part of the energy associated with entropy is something we cannot use, but the rest of the energy is something we can use; we call that part the "free energy." If we look at the form of the free energy, what we find is that it has a product of an intensive variable and an extensive variable. Then we have two things, the free energy and the internal energy, and if we recall our lecture on convex optimization, convex duality, and variational methods, what we find is that the free energy is a Legendre-Fenchel pair of the internal energy.

Second, the $\textbf{enthalpy}$ $H(S,p) = U(S,V) + p V$ is the Legendre pair of the internal energy $U$, and $p$ and $V$ are the Legendre conjugate variables.

Similarly, the $\textbf{Gibbs free energy}$ $G(T,p) = U - T S + p V$ is the Legendre pair of the internal energy $U$, with $(T,p)$ and $(S,V)$ being the Legendre conjugate variables. The Gibbs free energy is the convex pair (we also call a Legendre pair a convex pair) of the internal energy, with the conjugate variables being two-dimensional: we have on the one hand $T$ and $p$, temperature and pressure, both intensive variables, and on the other hand $S$ and $V$, entropy and volume, both extensive variables.
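As a quick consistency check (a standard derivation, not given explicitly in the lecture), we can differentiate $F = U - TS$ and substitute $dU = T\,dS - p\,dV$:

$dF = dU - T\,dS - S\,dT = (T\,dS - p\,dV) - T\,dS - S\,dT = -S\,dT - p\,dV,$

so that $S=-\left({\partial F\over\partial T}\right)_V$ and $p=-\left({\partial F\over\partial V}\right)_T$: the Legendre transform swaps the roles of the conjugate pair $(S,T)$, and $F$ takes $(T,V)$ as its natural variables.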

Now, it is not very surprising that we see convex optimization and the variational method here, because the idea of convex optimization ultimately came from here. Similarly, the idea of the graphical models that we talked about previously in this class also came from Statistical Physics. Here I just want to review the interpretation of the Legendre transform. We are given a function $f$, and $f$ is a convex function.

We define the Legendre transform:

$f: I\to \mathbf R$ is a convex function.

$f^*(x^*)=\sup_{x\in I} (x^*\cdot x-f(x))$ is its Legendre transform.

A Legendre transform takes a different variable from the original function. We define the Legendre transform at the point $x^*$ as the supremum of $x^*\cdot x - f(x)$ over $x$: we multiply a variable with its conjugate pair (the intensive variable with the extensive variable), subtract the function $f$, and take the supremum, and we get the Legendre transform at the conjugate variable. To picture this, put the variable $v$ on the $x$ axis and the variable $p$ on the $y$ axis.

We can express $v$ in terms of $p$, and $p$ in terms of $v$, along a monotonically increasing curve. The area underneath this curve we call $L$, and the area to the left-hand side of the curve we call $H$. Then $L$ is the integral of $p$ from zero to $v$, and $H$ is the integral of $v$ from zero to $p$. What we have is that $L(v)$ plus $H(p)$ equals $p$ times $v$, or, put another way, $H(p)$ equals $p$ times $v$ minus $L(v)$:

$L(v)=\int_0^v p(v')\,dv'$ $\Rightarrow {\partial L\over\partial v}=p$

$H(p)=\int_0^p v(p')\,dp'$ $\Rightarrow {\partial H\over\partial p}=v$

$H(p) = p\cdot v-L(v)$

$L(v)+H(p) = p\cdot v$

This is another interpretation of the Legendre transform. What we want to notice is that we have two functions, $L$ and $H$; $L$ and $H$ are Legendre pairs, and they take different variables: the Legendre transform of a function takes a different variable, not necessarily the variable of the original function. These functions actually have names: $H$ is what people normally call the energy, the Hamiltonian, and $L$ is the Lagrangian, so $L$ and $H$ have different names.

- The function $y=f(x)$ corresponds to the envelope $y=p\cdot x-f^*(p)$, where
- $p=f'(x)$ is the slope of $f(x)$ at $x$
- $f^*(p)$ is the intercept determined by $p$
- $f^*(p)$ is convex in $p$
- $f(x)=f^{**}(x)$ if $f(x)$ is convex
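To make this concrete, here is a minimal numerical sketch of the Legendre transform (assuming NumPy; the grid and the test function $f(x)=x^2/2$ are arbitrary choices, not from the lecture):

```python
import numpy as np

def legendre(f, xs):
    """Numerical Legendre transform f*(p) = sup_x (p*x - f(x)) over a grid xs."""
    def f_star(p):
        return np.max(p * xs - f(xs))
    return f_star

xs = np.linspace(-5, 5, 100001)
f = lambda x: 0.5 * x**2        # f(x) = x^2/2 is its own Legendre pair
f_star = legendre(f, xs)

# For f(x) = x^2/2, the transform is f*(p) = p^2/2, so f* should match f.
for p in [0.0, 1.0, 2.0]:
    print(p, f_star(p), 0.5 * p**2)
```

Since $f(x)=x^2/2$ is self-conjugate, the grid maximum reproduces $f^*(p)=p^2/2$ to high accuracy, illustrating $f=f^{**}$ for convex $f$.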

If we compare the two: people first understood thermodynamics before they thought of it in terms of the Legendre transform. The science of these quantities looked pretty weird at first, but people first had the equation for the increment of energy, and later found that all of those quantities have the same form, namely in terms of Legendre pairs. If we take $T$ and $S$ as the Legendre conjugate variables, we get the free energy; if we take another set of Legendre conjugate variables, we get another pair of Legendre functions.

This is the envelope interpretation that we talked about before. We have a function $y=f(x)$, and we want to find its Legendre transform. What we do is take a line of a given slope $p$ and move it up until it first touches the graph of the function; since the function is convex, the line will touch it at some point for the first time. At that moment, the intercept on the $y$ axis determines the value of the Legendre pair corresponding to the slope.

This is another interpretation of the idea; those interpretations are actually the same. What we have is $f^*$, the Legendre transform at $p$, plus the original function at $x$: since $p$ and $x$ are Legendre conjugates, $f^*(p) + f(x)$ equals the total length $x$ times the slope $p$ at the touching point. As we mentioned, people found many interesting things about thermodynamics, but the question people had was: how do we explain thermodynamics using the least number of laws, the fewest assumptions?

### Boltzmann relation and partition function

Here $E$ is a scalar variable, the energy, but $E$ could be other things; it could be a vector of features. What we know is that the maximum-entropy probability distribution, given a set of features, the sufficient statistics $E$, takes an exponential form. We have a set of parameters, the natural parameters $\beta$ corresponding to those features: given $E$ we can find $\beta$, and given $\beta$ we can find $E$. We consider the probability of a system to be in energy $E_j$, where $j$ is an index of the energy: if we think of a system with a finite number of molecules, then the number of energy configurations is finite and we can actually number those energy configurations.

According to Boltzmann it looks like the following form.

$P_j = \exp(-\beta E_j) / Z$, where

- $P_j$ is the probability of observing the $j^{\rm th}$ state with energy $E_j$
- $Z=\sum_j \exp(-\beta E_j)$ is the **partition function**.

$Z$ here is called the partition function, and it is the normalization constant that makes this a probability. The conventional, classical way to derive this distribution is the following. We consider a small cylinder, and we put this small cylinder in a heat reservoir. The small cylinder has energy $E_j$, that is, it is in energy configuration $j$.

The energy of the reservoir, $E_R$, is the total energy minus $E_j$, and we assume that the total energy of the whole system of reservoir plus small cylinder is constant, so $E_R + E_j = E$.

The following is a very critical assumption: in an isolated system, all configurations are equally likely. Of course we have a constraint, which is that all configurations have the same total energy $E$. This is very critical, and it is actually what we previously identified as maximum entropy: under maximum entropy, all configurations are equally likely.

$\log P_j \propto \log\Omega(E-E_j) = \log\Omega(E)-\beta E_j+\cdots \Rightarrow P_j\propto \exp(-\beta E_j)$. Now what we do is take the first-order approximation around the total energy $E$. The first-order term is $-\beta E_j$, and $\beta$ naturally comes out as the partial derivative of the log number of configurations with respect to energy. From here we can say that the probability for the system to be in energy $E_j$ is proportional to the exponential form above.

${\partial \log Z\over \partial \beta} = -\sum_j E_j P_j = -U$, where $U$ is internal energy

${\partial \log Z\over \partial V} = {1\over Z} \sum_j {\partial\over\partial V}\exp(-\beta E_j(V))= \beta \sum_j p_j P_j = \beta p$ where $p_j$ is the **pressure** associated with the $j^\mbox{th}$ state.
- $d\log Z= {\partial\log Z\over\partial\beta} d\beta + {\partial\log Z\over\partial V} d V=-U d\beta+\beta p d V = -d(\beta U) + \beta \underbrace{(d U + p d V)}_{T d S}$, or $T d S = {1\over\beta} d (\log Z+\beta U)$, or $S=k \log Z + {U\over T}$, where $k={1\over\beta T}$ is Boltzmann's constant, so that $U-T S=-k T \log Z = F$ is the free energy.
- $S = k\log Z + k\beta U = k \log Z + k \sum_j \beta E_j P_j = -k \sum_j P_j \log P_j$ is the entropy relationship.
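These identities are easy to verify numerically. The sketch below uses a made-up set of discrete energy levels and sets $k=1$ for simplicity:

```python
import numpy as np

# Hypothetical discrete energy levels; Boltzmann's constant k = 1.
E = np.array([0.0, 1.0, 2.0, 5.0])
beta = 0.7
T = 1.0 / beta

Z = np.sum(np.exp(-beta * E))     # partition function
P = np.exp(-beta * E) / Z         # Boltzmann probabilities
U = np.sum(E * P)                 # internal energy, equals -d(log Z)/d(beta)
S = -np.sum(P * np.log(P))        # Gibbs-Shannon entropy (k = 1)
F = U - T * S                     # Helmholtz free energy

print(F, -T * np.log(Z))          # the two expressions for F agree
```

The agreement is exact, since $S = k\log Z + k\beta U$ implies $U - TS = -kT\log Z$ identically.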

# Sample from Probability Distributions [08/10/2015]

Today our topic is how we sample from different probability distributions. It is always good to know how to sample them, especially if we work with a platform that does not have the corresponding distribution defined, or does not have a function that would give us the desired properties. The first way to sample from a probability distribution is through the inverse transform, which works in the following way. We have a random variable $X$, and $F(x)={\bf P}(X\le x)$ is its cumulative distribution function.

What are the properties of  $F(x)$?

First, it is a function taking values between zero and one, because $F(x)$ is the probability of the event $X\le x$, and a probability is greater than or equal to zero and less than or equal to one. The second property is that $F$ is non-decreasing, and the reason is that probability mass functions and density functions are non-negative: we do not cope with negative probability.

So $F$ is non-decreasing. We also know that as $x$ tends to minus infinity we have zero, because no probability mass hides at minus infinity, and if we proceed to positive infinity we get one, because nothing hides beyond there either. That is the cumulative distribution function.

### INVERSE TRANSFORM

What is Inverse Transform?

$F^{-1}(y) = \inf\{x|F(x)\ge y\}$ where $0\le y\le 1$ is the inverse of $F$

What we do in the method of inverse transform is first take the inverse of $F$: $F^{-1}(y)$ is defined as the infimum of the set of $x$ such that $F(x)$ is greater than or equal to $y$.

This is not a problem if we cope with a continuous, strictly increasing function, but if we cope with, for example, a distribution supported on a finite set, then we have to be careful and take the infimum: wherever there is a probability mass, the cumulative distribution function has a jump, and with this convention the inverse is left-continuous.
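For a distribution on a finite set, this generalized inverse can be implemented directly as a search over the CDF. A minimal sketch (the support and probabilities are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete distribution on a finite support.
values = np.array([1, 2, 5])
probs  = np.array([0.2, 0.5, 0.3])
cdf    = np.cumsum(probs)                 # [0.2, 0.7, 1.0]

def inv_cdf(y):
    # F^{-1}(y) = inf{x : F(x) >= y}: the first index where the CDF reaches y
    return values[np.searchsorted(cdf, y)]

xs = np.array([inv_cdf(u) for u in rng.uniform(size=100_000)])
print((xs == 2).mean())                   # should be close to 0.5
```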

Here is the inverse transform of a uniform random variable $U$ on $(0,1)$:

$X=F^{-1}(U)$ follows the distribution defined by $F$

proof:

\begin{align} &{\bf P}(\{F^{-1}(U)\le x\}) \\ = & {\bf P}(\{U\le F(x)\}) \\ = & F(x) \end{align}

We know that the cumulative distribution function goes from zero to one, and what we do is first sample on the $y$ axis. After we sample on the $y$ axis, we find the corresponding value on the $x$ axis, and we take this $x$; this $x$ has cumulative distribution function $F(x)$.

This is another example.

$F(x)=1-\exp(-\lambda x)$ is the cumulative distribution function of an exponential random variable. That is how we get a sample from an exponential distribution through this method of inverse transform.

$X=F^{-1}(U) = -{1\over\lambda}\log(1-U)=-{1\over\lambda}\log U_1$, where $U_1=1-U$ is also uniform on $(0,1)$.

Here, I sample from a uniform distribution, then I make the transform, and then I compare the quantiles of the two distributions, and we find that the quantiles are equal. This intuitively helps us verify that this is the right way to sample from a known cumulative distribution function using a uniform distribution.
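A minimal sketch of this inverse transform for the exponential distribution (assuming NumPy; the rate $\lambda=2$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                        # hypothetical rate parameter

u = rng.uniform(size=100_000)
x = -np.log(u) / lam             # inverse transform: F^{-1}(u) = -log(u) / lambda

# Compare the sample mean to the true mean 1/lambda of the exponential.
print(x.mean(), 1 / lam)
```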

### Acceptance-Rejection

Another way to sample from a distribution is a method called acceptance-rejection. Suppose that we have a probability density function $g(x)$, and the random variable $Y$ has this probability density function. Suppose that whenever we sample this $Y$ from $g$, we either accept or reject it, and the probability of accepting $Y=y$ is $h(y)$. What we want to show is that an accepted $Y$ is a random variable whose probability density function is proportional to $g(x)\cdot h(x)$. The reason is the following.

The cumulative distribution function of an accepted value is ${\bf P}(X\le x\mid\mbox{accept})$. The proposal $Y$ falls below $x$ with probability $\int_{-\infty}^x g(y)\,dy$, but the rate of acceptance differs for different values of $Y$, so we need to multiply the density by $h$, giving $\int_{-\infty}^x h(y)g(y)\,dy$; that is the probability, but then we need to normalize it.

We need to divide by a normalization constant, and the normalization constant is $\int_{-\infty}^\infty h(y)g(y)\,dy$. That is how we claim that the probability density function of an accepted $Y$ is proportional to $g(x)\cdot h(x)$.

To sample a beta distribution this way, we first sample from a uniform distribution supported on the interval $(0,1)$, and then accept the sample point according to the acceptance function. After we do this, we know that the result will be a beta distribution. Here I run the code: I have plotted the true density of $B(4,3)$, and then I have plotted the distribution that I have estimated from the acceptance-rejection method, which is the red line. I have also created another way of sampling a beta distribution, through order statistics. We can see that those three lines are pretty close, and as the sample size tends to infinity the red line and the green line should approach the black line. You can study the code afterwards.
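The lecture's own code is not reproduced here, so the following is a sketch of acceptance-rejection for $B(4,3)$ with a uniform proposal; the bound $M=2.1$ slightly exceeds the density's maximum of about $2.07$ at $x=3/5$:

```python
import numpy as np

rng = np.random.default_rng(0)

def beta43_pdf(x):
    # Beta(4,3) density: x^3 (1-x)^2 / B(4,3), where B(4,3) = 1/60
    return 60 * x**3 * (1 - x)**2

M = 2.1                              # envelope constant: density max is ~2.07 at x = 0.6

def sample_beta43(n):
    out = []
    while len(out) < n:
        y = rng.uniform()            # proposal from Uniform(0,1), density g = 1
        u = rng.uniform()
        if u <= beta43_pdf(y) / M:   # accept with probability f(y) / (M * g(y))
            out.append(y)
    return np.array(out)

xs = sample_beta43(50_000)
print(xs.mean())                     # true mean of Beta(4,3) is 4/7 ~ 0.571
```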

### COMPOSITION

There are also other methods of sampling from a distribution. Suppose that we have a cumulative distribution function $F$.

$F=\sum_{i=1}^k p_i F_i$ is a cumulative distribution function, where $\sum_i p_i=1$

$F$ is a mixture of $k$ distributions, $F_1$ to $F_k$. If we can decompose $F$ into a mixture of distributions, then we can sample from $F$ by first sampling the category: which component will we sample from?

After we sample the category, say we know that we want to sample from the $j$th category, we then sample $X$ from $F_j$. $X$ has cumulative distribution function $F$, because if we work out the cumulative distribution function of $X$, replacing the probability that $J$ equals $j$ with the probability of taking a sample from the $j$th component, we get $F(x)$, which is what we require.
- We first sample $J$ from the categorical distribution $(p_1,\dots,p_k)$, then sample $X$ from $F_J$
- $X$ has the distribution defined by $F$:
  $P(X\le x)=\sum_j P(X\le x|J=j)P(J=j)=\sum_j F_j(x)\,p_j=F(x)$
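The two steps above can be sketched as follows; the mixture of two exponential distributions is a made-up example, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture: weights (0.3, 0.7) over Exp(rate=1) and Exp(rate=5).
p = np.array([0.3, 0.7])
rates = np.array([1.0, 5.0])

def sample_mixture(n):
    j = rng.choice(len(p), size=n, p=p)      # step 1: sample the category J
    u = rng.uniform(size=n)
    return -np.log(u) / rates[j]             # step 2: sample X from F_J (inverse transform)

xs = sample_mixture(200_000)
print(xs.mean())    # true mean: 0.3 * 1 + 0.7 * (1/5) = 0.44
```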

An example of sampling through the method of composition: suppose that we want to sample from something pretty irregular that looks like the following.

$f(x) = 0.5+2\cdot (1-0.5)x$

It is supported on the interval from zero to one. Because this is a PDF, we require the area under this line to be one, and it is, so we know it is the PDF of a continuous random variable supported on the interval $(0,1)$.

One way to sample from this density is to see that it is a composition of two distributions: $f(x)=0.5\cdot 1 + 0.5\cdot 2x$, that is, with probability $0.5$ we sample from the uniform density on $(0,1)$ and with probability $0.5$ from the density $2x$. We first sample which distribution we want to sample from, and then we proceed to sample from either the first distribution or the second distribution. The code to implement this is actually pretty easy; we can study it.

### Special Properties

There are other ways to sample a random variable, based on special properties of the random variable.

We represent the random variable $X$ in terms of other variables that are easily generated. For example:

- Sample from the **binomial distribution** as the sum of i.i.d. Bernoulli trials
- Sample from the **Poisson distribution** as the number of i.i.d. realizations of an exponential distribution before the total time exceeds unit time
- Sample from the **Erlang-$k$ distribution** as the total time of $k$ i.i.d. realizations of an exponential distribution
- Sample from the **beta distribution** $B(k,n+1-k)$ as the $k^{\rm th}$ smallest number among $n$ numbers uniformly distributed in $[0,1]$

Let's talk about sampling from a Poisson distribution as the number of i.i.d. realizations of an exponential distribution before we have reached a certain amount of time.

The reason is that a Poisson distribution is the distribution of the number of Poisson events in a unit interval: if we put intervals with exponentially distributed lengths head to tail until we get from one end of the unit interval to the other, then the number of intervals that fit inside has a Poisson distribution.
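A sketch of this construction (assuming NumPy; the rate $\lambda=3$ is an arbitrary choice): we add exponential inter-arrival times until the total exceeds unit time, and count the arrivals.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 3.0      # hypothetical rate of the Poisson process

def sample_poisson(n):
    counts = np.empty(n, dtype=int)
    for i in range(n):
        t, k = 0.0, 0
        while True:
            t += rng.exponential(1 / lam)   # i.i.d. exponential inter-arrival time
            if t > 1.0:                     # stop once total time exceeds unit time
                break
            k += 1
        counts[i] = k
    return counts

xs = sample_poisson(50_000)
print(xs.mean(), xs.var())   # both should be close to lam = 3
```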

Similarly, for the Erlang distribution, which people normally use in modeling telephone traffic, we can sample as the total time of $k$ i.i.d. realizations of an exponential distribution. This is an example where we sample the Poisson distribution according to a property of the Poisson distribution. Here $X$ is a sample of the Poisson distribution built from exponential distributions, and $Y$ is a sample from the Poisson distribution directly. If we plot the quantile-quantile plot of the two, we find that they are pretty similar: they are the same type of distribution.

### Test

There are also other methods of sampling from a probability distribution, which we will talk about later. For example, you might have already heard of the Gibbs sampler, or we might use something called the Metropolis-Hastings sampler, which is very similar to the acceptance-rejection sampling that we talked about before.

In order to understand those two ways of sampling from a random variable, we first need to understand some concepts about stochastic processes, because those two ways of sampling from a probability distribution depend on the asymptotic properties of a stochastic process. Meaning that we sample from a random process again and again until we lose all memory of where we started, and at that time we know that the distribution of the statistic that we sample follows the distribution that we are interested in.

We have talked about many different ways of sampling from probability distributions. An even more important question is: if we want to work with probability, which probability distribution should we choose? Is the random variable exponentially distributed, or Poisson distributed, or does it follow some other distribution? This involves statistics, or testing. We have a hypothesis, which is that a random variable follows some distribution, and we want to test whether it is likely or unlikely that this random variable has this distribution, and reject the unlikely cases.

Here is just an example of hypothesis testing. Hypothesis testing involves the following concepts.

- **belief/null hypothesis**: a coin is fair
- **experiment**: toss the coin 10 times, resulting in a **sample** of 10 tosses
- **test statistic**: number of heads
- **significance level**: probability of rejecting the hypothesis/belief when it is true
- **critical region/region of rejection**: reject the hypothesis when the statistic falls into this region

In order to conduct hypothesis testing, we first need to have a belief about what we have; without the belief we cannot do anything. We also call this belief the null hypothesis. For example, we may want to test whether a coin is a fair coin, meaning that we get approximately the same number of heads and tails, or, to say it another way, that we get the same proportion of heads and tails as the sample size tends to infinity.

Suppose that this is our hypothesis. After we have a hypothesis, we proceed to conduct an experiment. The experiment is to toss the coin ten times, or twenty times, some finite number of times. After this experiment we get a sample, which is ten tosses of a coin: we can get head, tail, head, tail, something like that. After we get this sample, we want to come up with a test statistic. One intuitive way of finding out whether the sample is likely or unlikely is to find the probability of getting the sample, but this will not work.

The reason is the following. Under the null hypothesis that we have a fair coin, one intuitive idea is to find the probability of getting our particular sample of ten tosses: the higher the probability, the more likely the hypothesis is true. That will not work, because if we have a fair coin, then the probability of getting any particular sample of ten tosses is the same, $0.5$ to the power of ten. There is another way, which involves a statistic.

If we take our test statistic to be the total number of heads in the ten tosses, this looks more promising. If the coin is a fair coin, then we are likely to get five heads and five tails, or four heads and six tails, or six heads and four tails, and it is very unlikely for us to get one head or two heads, or nine heads or ten heads.

The reason is that, by using the number of heads as the test statistic, we have a binomial distribution. We know that the middle region is pretty likely and the two tail regions are pretty unlikely. Now we have found a promising test statistic. The important thing about this test statistic is that we know its probability distribution under the null hypothesis, namely the binomial distribution.

We can then proceed to reason about whether the null hypothesis is likely or unlikely. As we said, if the statistic falls in the tail regions, the hypothesis is less likely. Corresponding to each significance level, we get some critical region of rejection: if the number of heads falls in this region, we reject our null hypothesis that the coin is fair; otherwise we say that the hypothesis is compatible with our sample, or our sample is compatible with the hypothesis. The way we choose the significance level is arbitrary: we could choose, for example, $0.1$ or $0.001$, and corresponding to different significance levels we have different critical regions of rejection.
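The critical region for the fair-coin example can be computed directly from the binomial distribution; here is a small sketch using only the standard library (the two-sided p-value convention is one reasonable choice among several):

```python
from math import comb

# Null hypothesis: the coin is fair (p = 0.5); n = 10 tosses.
n = 10
pmf = [comb(n, k) * 0.5**n for k in range(n + 1)]   # binomial probabilities

def p_value(heads):
    # probability, under the null, of a head count at least as far from n/2
    return sum(p for k, p in enumerate(pmf) if abs(k - n / 2) >= abs(heads - n / 2))

print(p_value(5))   # 1.0: five heads is perfectly compatible with a fair coin
print(p_value(9))   # ~0.021: nine heads falls in the critical region at level 0.05
```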

In doing a statistical test, we always run the risk of either rejecting the hypothesis when it is correct or accepting the hypothesis when it is wrong. For example, when the hypothesis is correct and the coin is indeed fair, the possibility of landing in the rejection regions always exists, and in those cases we reject a true hypothesis. In addition, if the hypothesis that the coin is fair is wrong, it is still possible for the number of heads to land outside the rejection regions, and then we have to accept the hypothesis as something compatible even though it is wrong.

We are coping with probability here, and what I just described is a test. The actual thing that I want to talk about is that, since we are working with probability distributions, we always care about how well a sample fits a probability distribution. We have a sample, and we want to know whether the probability distribution is a good fit or a bad fit to the sample. Consider a standard Gaussian distribution. I randomly sampled ten points from another Gaussian distribution with mean zero and standard deviation one, which is the same distribution; we can say that the sample, represented by the red dots, looks pretty compatible with this distribution. I made another sample from a normal distribution with mean one and standard deviation $0.5$, represented by the blue pluses, and we can say that the sample represented by the blue pluses is a bad fit to the standard normal distribution.

We can talk about different things here. First, we can conduct a parametric goodness-of-fit test, which says: suppose we know that our sample is from a normal distribution; have we got the right mean value of the distribution? Have we got the right variance or standard deviation? This involves parameters, so we call it a parametric test. Similarly to our example of testing whether a coin is fair, one important point is that, in order to use a test statistic, we want to know the probability density function or the cumulative distribution function of this test statistic. Here, for example, if the test statistic is the average value, then we first want to find the PDF or CDF of this test statistic before we can set a significance level and find the critical region, the region of rejection.

For example, if we know that our sample comes from a Gaussian distribution, and if we know the variance of this Gaussian distribution, then consider our estimator: $X_1$ to $X_N$ are random variables, and our estimator $\bar X$ is a function of those random variables, so it is also a random variable. Whenever we have a random variable, we can talk about its distribution.

Since the $X_i$ are Gaussian random variables, and a linear combination of Gaussian random variables is again Gaussian, we know that $\bar X = {1\over N}\sum_{i=1}^N X_i$ is a Gaussian random variable. Its mean is

$$\mathbf E[\bar X] = \mathbf E\left[{1\over N}\sum_{i=1}^N X_i\right] = {1\over N}\sum_{i=1}^N \mathbf E[X_i] = {1\over N}\cdot N\mu = \mu,$$

where we first pull the factor ${1\over N}$ out and then exchange the summation with the expectation; since all the $X_i$ are sampled from the same distribution with mean $\mu$, the sum of their means is $N\mu$.

That is the mean of $\bar X$. Similarly we can find the variance of $\bar X$; it is a little more complicated, but it comes out to

$$\mathrm{Var}(\bar X) = {1\over N^2}\sum_{i=1}^N \mathrm{Var}(X_i) = {\sigma^2\over N}.$$

This means that $\bar X$ follows a normal distribution with mean $\mu$ and variance $\sigma^2/N$, and we can plot the PDF of $\bar X$: $\bar X$ is a random variable and could land anywhere, but values near $\mu$ are more likely than values far away.
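We can check the claim $\bar X\sim N(\mu,\sigma^2/N)$ by simulation; this is a minimal sketch in which the population parameters, sample size and number of trials are arbitrary choices.

```python
import random
import statistics

random.seed(42)

mu, sigma, n = 0.0, 1.0, 25      # population parameters and sample size
trials = 20000

# Draw many samples of size n and record each sample mean.
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(trials)]

print(statistics.fmean(means))      # should be close to mu = 0
print(statistics.pvariance(means))  # should be close to sigma^2 / n = 0.04
```

The empirical variance of the sample means shrinks like $\sigma^2/N$, which is exactly why averaging over larger samples gives sharper estimates.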

### Goodness-of-fit test

Similarly, suppose that we do not know the standard deviation of the Gaussian distribution from which we draw our sample. One idea is to normalise the deviation of $\bar X$ from the hypothesised mean by dividing by the estimated standard deviation. As a result we no longer get a Gaussian random variable; we get the $t$ statistic, which follows another probability density. This is the parametric goodness-of-fit test, which assumes that we know the underlying family of distributions but need to fix some parameters, and asks whether those parameters are compatible with the sample. We can also talk about non-parametric goodness of fit, meaning that we want to check whether a sample is compatible with a distribution without assuming a parametric family. There are two ways to do this. One is the Kolmogorov test, which is based on the following idea.

If we plot the cumulative distribution function of random variable $X$ against the cumulative distribution function of random variable $Y$, we get a curve. If they are the same distribution, this curve is the straight line $y=x$. The deviation of the plot of $F_X$ against $F_Y$ from this line gives some indication of the extent to which the two distributions differ. The statistic used by the Kolmogorov test is the supremum, the maximum deviation of this curve from the line $y=x$. The interesting thing is that, as the sample size tends to infinity, we know the limiting form of the distribution of this statistic, the Kolmogorov distribution.
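A minimal sketch of the Kolmogorov statistic, computed as the maximum gap between the empirical CDF of a sample and a hypothesised CDF; the sample sizes and the parameters of the two samples mirror the red-dots/blue-pluses example above but are otherwise arbitrary choices.

```python
import math
import random

def normal_cdf(x):
    # CDF of the standard normal distribution.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def ks_statistic(sample, cdf):
    # Kolmogorov statistic: the largest gap between the empirical CDF
    # of the sample and the hypothesised CDF.
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # The empirical CDF jumps from i/n to (i+1)/n at x.
        d = max(d, abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
    return d

random.seed(0)
good = [random.gauss(0, 1) for _ in range(500)]    # matches N(0, 1)
bad  = [random.gauss(1, 0.5) for _ in range(500)]  # does not

print(ks_statistic(good, normal_cdf) < ks_statistic(bad, normal_cdf))  # True
```

The well-fitting sample produces a small statistic, while the mismatched sample produces a large one, which is exactly the signal the test thresholds against the Kolmogorov distribution.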

This is how we use the Kolmogorov test to check whether two random variables, or two samples, come from the same distribution. Another test is the so-called chi-square goodness-of-fit test, which uses a chi-square statistic that follows a chi-square distribution. The chi-square distribution arises from the sum of squares of standard normal random variables: if $X_1$ to $X_N$ are standard normal, meaning Gaussian with mean zero and standard deviation one, then $\sum_{i=1}^N X_i^2$ follows a chi-square distribution with $N$ degrees of freedom. The reason we can use the chi-square test to check goodness of fit for many distributions is that we can first use it to test whether a sample fits a binomial distribution, and we can then extend the result to the multinomial distribution.

When we work with a continuous distribution, we cut the real line into intervals, into bins, so ultimately we end up working with binomial or multinomial distributions. That is the chi-square goodness-of-fit test. Also, when we work with different distributions, we need to estimate the parameters. We first talked about testing whether the parameters and the distributions are right; now we talk about the different ways to estimate the parameters.
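The binning idea can be sketched as follows: cut the real line into bins, count the sample points in each bin, and compare with the counts expected under the hypothesised distribution; the bin edges and sample size here are arbitrary choices for the example.

```python
import math
import random

def normal_cdf(x):
    # CDF of the standard normal distribution.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def chi_square_stat(sample, cdf, edges):
    # Bin the real line at the given edges and compare observed counts
    # against counts expected under the hypothesised distribution.
    n = len(sample)
    bounds = [-math.inf] + edges + [math.inf]
    stat = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        observed = sum(lo <= x < hi for x in sample)
        expected = n * (cdf(hi) - cdf(lo))
        stat += (observed - expected) ** 2 / expected
    return stat

random.seed(1)
sample = [random.gauss(0, 1) for _ in range(1000)]
edges = [-1.5, -0.5, 0.5, 1.5]          # five bins over the real line

stat = chi_square_stat(sample, normal_cdf, edges)
print(stat)   # compare against a chi-square with (bins - 1) degrees of freedom
```

With five bins the statistic is referred to a chi-square distribution with four degrees of freedom; a sample that truly comes from the hypothesised distribution typically yields a value near that number of degrees of freedom.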

### Parameter Estimation with Maximum Likelihood

One method that is used quite often is the maximum likelihood estimator. To make a maximum likelihood estimate of the parameters, we first write out the likelihood, which is the probability of the sample conditioned on the parameters, and then we maximise this likelihood. To maximise it, we normally take the derivative of the likelihood with respect to the parameters, supposing the likelihood is a continuous function of the parameters, set this derivative to zero, and solve for the parameters. What we know is that the maximum likelihood estimator is asymptotically normal, meaning that as the sample size goes to infinity its distribution approaches a normal distribution. It is also an efficient estimator: the estimate is a random variable, as we saw before, and we want an estimator whose variance is smaller than that of other estimators.

What we know is that the maximum likelihood estimator, which is a random variable, has minimal variance among many other similar estimators. That is all I want to talk about in this lecture; in the next lecture we will proceed to different stochastic processes, as well as how to sample from those stochastic processes.
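A minimal sketch of maximum likelihood estimation for a Gaussian sample: setting the derivatives of the log-likelihood to zero gives the closed-form estimates (the sample mean and the biased sample variance); the true parameters used to generate the data are arbitrary choices.

```python
import math
import random
import statistics

def log_likelihood(sample, mu, sigma2):
    # Gaussian log-likelihood of the sample given parameters mu, sigma^2.
    n = len(sample)
    ss = sum((x - mu) ** 2 for x in sample)
    return -0.5 * n * math.log(2 * math.pi * sigma2) - ss / (2 * sigma2)

random.seed(7)
sample = [random.gauss(2.0, 1.5) for _ in range(2000)]

# Setting d/d(mu) and d/d(sigma^2) of the log-likelihood to zero gives:
mu_hat = statistics.fmean(sample)
sigma2_hat = statistics.fmean((x - mu_hat) ** 2 for x in sample)

# Any perturbation of the parameters can only lower the likelihood.
best = log_likelihood(sample, mu_hat, sigma2_hat)
assert best >= log_likelihood(sample, mu_hat + 0.1, sigma2_hat)
assert best >= log_likelihood(sample, mu_hat, sigma2_hat * 1.1)

print(mu_hat, math.sqrt(sigma2_hat))   # close to the true values 2.0 and 1.5
```

Rerunning with larger samples shows the estimates tightening around the true parameters, which is the asymptotic normality and efficiency described above.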

# Probability Distributions [09/08/2015]

In this lecture, we’ll talk about several interesting probability distributions. I assume you have probably heard of these distributions before, but what I’m going to tell you is how they come into being; after you know that, they will no longer feel mysterious. Another message I want to convey is this: what I find is that engineering students, including me many, many years ago, often just pick a distribution and use it at random, without thinking about the connotation of that probability distribution. I hope that after you see how the distributions in this lecture come into being, you will think about how other distributions come into being, and that this will help you reason about the distributions you will work with in the future.

### Distributions from repeated Bernoulli trials

Let us toss a coin repeatedly. Each toss is a Bernoulli trial: as a result, we get either a head up or a tail up. We assign probability $p$ to the event that we get a head up, and probability $1-p$ to the event that we get a tail up. We call a head a success and a tail a failure. If we toss this coin many, many times, we get interesting random variables and distributions. Formally speaking, let random variable $X$ denote the result of a Bernoulli trial. $X$ can take two values: $X=1$ with probability $p$ ($P(X=1)=p$) and $X=0$ with probability $1-p$ ($P(X=0)=1-p$).

What is the mean value and variance of this Bernoulli trial?
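Applying the definitions of mean and variance directly to the two-valued random variable $X$:

$$\mathbf E[X] = 1\cdot p + 0\cdot(1-p) = p,\qquad \mathbf E[X^2] = 1^2\cdot p + 0^2\cdot(1-p) = p,$$

$$\mathrm{Var}(X) = \mathbf E[X^2]-\left(\mathbf E[X]\right)^2 = p - p^2 = p(1-p).$$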

$= \exp(u\cdot B(t))\cdot \mathbf E(\exp(u(B(t+s)-B(t))))$

Brownian motion has the strong Markov property: for any finite stopping time $T$, $P(B_{T+t}|F_T)=P(B_{T+t}|B_T)$. The strong Markov property is similar to the Markov property, except that in the definition the fixed time $t$ is replaced by a stopping time $T$. The proof of the strong Markov property can be done using the exponential martingale and the Optional Stopping Theorem.

##### 8. Hitting Time and Exit Time

Hitting time $\tau_x$ denotes the first time $B(t)$ hits level $x$: $\tau_x=\inf\{t>0:B(t)=x\}$. Exit time $\tau$ denotes the first time $B(t)$ exits an interval $(a,b)$: $\tau=\min(\tau_a,\tau_b)$.

Claim: Consider a Brownian motion $B_x(t)$ starting from $a<x<b$ and the exit time $\tau=\min(\tau_a,\tau_b)$. We have $P_x(\tau<\infty)=1$ and $E_x\tau<\infty$.

Corollary: $P_a(\tau_b<\infty)=1$ and $P_a(\tau_a<\infty)=1$.

###### 8.1 Distribution of Hitting Time

Let the maximum and minimum of a Brownian motion $B_0(s)$ starting from 0 on $[0,t]$ be $M(t)=\max_{0\le s\le t} B(s)$ and $m(t)=\min_{0\le s\le t} B(s)$, and let $\tau_x=\inf\{t>0: B(t)=x\}$. Then for any $x>0$ we have $P_0(M(t)\ge x)=2\cdot P_0(B(t)\ge x)=2(1-\Phi({x\over\sqrt t}))$.

Proof: Note the equivalence of the two events $\{M(t)\ge x\}=\{\tau_x\le t\}$, and that $P(B(t)\ge x)=P(B(\tau_x+(t-\tau_x))-B(\tau_x)\ge 0,\ \tau_x\le t)=P(\hat B(t-\tau_x)>0)\cdot P(M(t)\ge x)$ by the strong Markov property, where $P(\hat B(t-\tau_x)>0)={1\over2}$.

Corollary: $P(B(t)\le 0,\ \forall\, 0\le t\le 1)=0$.

The probability density of $\tau_x$ is the inverse gamma density $\displaystyle f_{\tau_x}={|x|\over\sqrt{2\pi}}t^{-{3\over2}}\exp\left(-{x^2\over 2t}\right)$, and $E\tau_x=\infty$. Hence, although $x$ is visited with probability 1, the expected time for $x$ to be visited is infinite.

Proof: Note the equivalence of the two events $\{M(t)\ge x\}=\{\tau_x\le t\}$. Hence $P(\tau_x\le t)=P(M(t)\ge x)=2P(B(t)\ge x)$; differentiate over $t$.

##### 9 Reflection Principle

Let $T$ be a stopping time.
Define $\hat B(t)=B(t)$ for $t\le T$ and $\hat B(t)=2B(T)-B(t)$ for $t\ge T$. Then $\hat B$ is also a Brownian motion. Figure 2 below shows this principle.

Figure 2: Reflection Principle of Brownian Motion

# Brownian Motion Calculus: Defining Integration & Differential of Brownian Motion [11/10]

### Ito Integral on a Non-Random Simple Process

Let us take an example. Consider a time period $[0,T]$ partitioned into $n$ intervals of equal length $\Delta t = T/n$ with endpoints $t_k = k\Delta t$, $k=0,\dots,n$. These are the times at which the market is open and trade in shares can take place. An investor buys $q(t_0)$ shares at time 0 at a price of $S(t_0)$ each. At time $t_1$, market trading establishes a new share price $S(t_1)$. Once this price has been revealed, the investor can change the quantity of shares held from $q(t_0)$ to $q(t_1)$, and likewise at the subsequent times $t_2$ through $t_{n-1}$. At time $t_n$ the entire portfolio is liquidated as soon as the share price $S(t_n)$ becomes known. A portfolio of $q(t_k)$ shares is held from just after the share price $S(t_k)$ has been revealed to when the share price $S(t_{k+1})$ at time $t_{k+1}$ becomes known. This portfolio is worth $q(t_k)S(t_k)$ at time $t_k$ and $q(t_k)S(t_{k+1})$ at time $t_{k+1}$. So from time $t_k$ to time $t_{k+1}$, with the change in share price $S(t_{k+1})-S(t_k)$ denoted by $\Delta S_k$, the portfolio value changes by $q(t_k)\,\Delta S_k$. The gain over all trades can then be expressed as $I_n=\sum_{k=0}^{n-1} q(t_k)\,\Delta S_k$. This expression is an example of a discrete stochastic integral. At each trading time, the investor decides the quantity of shares to be held until the next trading opportunity arises.
Some investors will buy shares and others will sell, as a trade requires two parties. The factors an investor takes into account in determining the shareholding are unknown, so $q$ is a discrete-time random process. Thus the stochastic integral is constructed from the two random processes $q$ and $S$:

$$I_n = \sum_{k=0}^{n-1} q(t_k)\left[\mu S(t_k)\,\Delta t + \sigma S(t_k)\,\Delta B(t_k)\right]$$

Increasing $n$ makes the time step $\Delta t$ smaller, and discrete-time trading approaches continuous-time trading. The first term is a summation with respect to the time step $\Delta t$; the second term is a summation with respect to the Brownian motion increments $\Delta B(t_k)=B(t_{k+1})-B(t_k)$.

Take the time period $[0,T]$ partitioned into $n$ intervals of equal length $\Delta t=T/n$, with endpoints $t_k=k\Delta t$, $k=0,\dots,n$. The simplest integrand $f$ is a step function whose values are non-random, so the only random quantity is the Brownian motion integrator. The step function $f$ can be written with indicator notation as

$$f = f(t_0)\,1_{[t_0,t_1)} + \dots + f(t_k)\,1_{[t_k,t_{k+1})} + \dots + f(t_{n-1})\,1_{[t_{n-1},t_n)}$$

where $1_{[t_k,t_{k+1})}$ denotes the indicator function, which has value 1 on the interval shown in the subscript and 0 elsewhere. Define the discrete stochastic integral $I_n(f)$:

$$I_n(f) = \sum_{k=0}^{n-1} f(t_k)\left[B(t_{k+1}) - B(t_k)\right]$$

The integral of a non-random process over Brownian motion is then the limit of the integrals of non-random simple processes approaching this non-random process.

### Ito Integral on a Simple Adapted Process

Now let $f$ take the random level $f(t_k,\omega)$ for $t_k < t < t_{k+1}$. As the history of the Brownian motion process becomes known progressively at each time $t_k$, it must be possible to determine $f(t_k)$ from this history alone. The current terminology is that $f(t_k)$ must be adapted to the filtration $\mathcal F_{t_k}$.
Properties:

- Linearity: $\int_0^T (\alpha X(t)+\beta Y(t))\,dB(t) = \alpha\int_0^T X(t)\,dB(t) + \beta\int_0^T Y(t)\,dB(t)$ for constants $\alpha$, $\beta$ and simple adapted processes $X(t)$, $Y(t)$
- $\int_0^T 1_{(a,b]}\,dB(t) = B(b)-B(a)$, and $\int_0^T 1_{(a,b]}X(t)\,dB(t) = \int_a^b X(t)\,dB(t)$
- Zero mean property: $\mathbf E\int_0^T X(t)\,dB(t)=0$
- Isometry property: $\mathbf E\left(\int_0^T X(t)\,dB(t)\right)^2 = \int_0^T \mathbf E(X^2(t))\,dt$

### Ito Integral of Regular Adapted Processes

After looking at the specific case of simple adapted processes, we want to define the Ito integral for general integrands, i.e. any regular adapted process. Let $X^n(t)$ be a sequence of simple processes convergent in probability to the process $X(t)$:

$$X^{(n)}(t)\rightarrow X(t)$$

Then, under some conditions, the sequence of their integrals $\int_0^T X^n(t)\,dB(t)$ also converges in probability to a limit $J$, and we define

$$\int_0^T X(t)\,dB(t) = \lim_{n\to\infty}\int_0^T X^n(t)\,dB(t) = J$$

Example: find $\int_0^T B(t)\,dB(t)$. Let $0=t_0^n<t_1^n<t_2^n<\dots<t_n^n=T$ be a partition of $[0,T]$ and let

$$X^n(t) = \sum_{i=0}^{n-1} B(t_i^n)\,1_{(t_i^n,t_{i+1}^n]}(t)$$

With $\delta_n=\max_i(t_{i+1}^n-t_i^n)\rightarrow 0$ we have $X^n(t)\rightarrow B(t)$ by the continuity of Brownian paths. Hence

$$\int_0^T X^n(t)\,dB(t) = \sum_{i=0}^{n-1} B(t_i^n)\left(B(t_{i+1}^n)-B(t_i^n)\right)$$

where

$$B(t_i^n)\left(B(t_{i+1}^n)-B(t_i^n)\right)=\frac12\left(B^2(t_{i+1}^n)-B^2(t_i^n)-\left(B(t_{i+1}^n)-B(t_i^n)\right)^2\right)$$

Therefore

$$\int_0^T X^n(t)\,dB(t) = \sum_{i=0}^{n-1}\frac12\left(B^2(t_{i+1}^n)-B^2(t_i^n)-\left(B(t_{i+1}^n)-B(t_i^n)\right)^2\right) = \frac12 B^2(T)-\frac12 B^2(0)-\frac12\sum_{i=0}^{n-1}\left(B(t_{i+1}^n)-B(t_i^n)\right)^2$$

Here the last summation converges in probability to the quadratic variation $T$ of Brownian motion on $[0,T]$.
Therefore $\int_0^T B(t)\,dB(t) = \lim_n \int_0^T X^n(t)\,dB(t) = \frac12 B^2(T)-\frac12 T$.

Properties: a regular adapted process has the same integral properties as the simple adapted processes discussed above.

- Linearity: $\int_0^T (\alpha X(t)+\beta Y(t))\,dB(t) = \alpha\int_0^T X(t)\,dB(t) + \beta\int_0^T Y(t)\,dB(t)$ for constants $\alpha$ and $\beta$
- $\int_0^T 1_{(a,b]}X(t)\,dB(t) = \int_a^b X(t)\,dB(t)$
- Zero mean property: $\mathbf E\int_0^T X(t)\,dB(t)=0$
- Isometry property: $\mathbf E\left(\int_0^T X(t)\,dB(t)\right)^2 = \int_0^T \mathbf E(X^2(t))\,dt$

### Ito Integral Process

$$Y(t)=\int_0^t X(s)\,dB(s)\qquad\text{(Equation A)}$$

where

$$\int_0^T X^2(s)\,ds<\infty\qquad\text{(Equation B)}$$

In simple terms, the Ito integral is a way of integrating one stochastic process with respect to another stochastic process, resulting in a new stochastic process. Here $X(s)$ is a square-integrable process (hence Equation B) that is adapted to the filtration generated by $B(s)$. Standard calculus does not apply to such functions, since the Brownian path is not differentiable at any point and has infinite variation over every time interval. The Ito integral plays a crucial part in mathematical finance, where it is used to evaluate strategies and make decisions about time-varying stochastic processes such as stock prices.

Properties:

- Martingale property: given the history $F_s$, the conditional expectation of $Y(t)$ is just $Y(s)$; no further knowledge of past events helps, provided the square-integrability condition (Equation B) holds: $$E(Y(t)|F_s) = Y(s)$$
- Quadratic variation: the quadratic variation of an Ito integral process is $\int_0^t X^2(s)\,ds$

### Ito’s Formula for Brownian motion

$B(t)$ is a Brownian motion on $[0,T]$. If we know the first-order and second-order derivatives of $f$, then we can express $f$ of the Brownian motion at time $t$ as in the example below.
Example:

$$f(B(t)) = f(0)+\int_0^t f'(B(s))\,dB(s) + {1\over 2}\int_0^t f''(B(s))\,ds$$

For instance, with $f(x)=e^x$:

$$\exp(B(t)) = 1+\int_0^t \exp(B(s))\,dB(s) + {1\over 2}\int_0^t \exp(B(s))\,ds$$

Proof: write the telescoping sum

$$f(B(t)) = f(0) + \sum_{i=0}^{n-1} \left(f(B(t^n_{i+1}))-f(B(t^n_{i}))\right)$$

Expanding each increment by Taylor’s theorem to second order,

$$f(B(t)) = f(0) + \sum_{i=0}^{n-1} f'(B(t^n_{i}))\left(B(t^n_{i+1})-B(t^n_{i})\right) + {1\over 2}\sum_{i=0}^{n-1} f''(\theta^n_{i})\left(B(t^n_{i+1})-B(t^n_{i})\right)^2$$

for some $\theta^n_{i}\in \left(B(t^n_{i}),B(t^n_{i+1})\right)$. The first-order sum converges to the stochastic integral:

$$\sum_{i=0}^{n-1} f'(B(t^n_{i}))\left(B(t^n_{i+1})-B(t^n_{i})\right) \to \int_0^t f'(B(s))\,dB(s)$$

For the second-order sum, as $n\to\infty$ we have, in probability,

$$\sum_{i=0}^{n-1} f''(\theta^n_{i})\left(B(t^n_{i+1})-B(t^n_{i})\right)^2\to \sum_{i=0}^{n-1} f''(B(t^n_{i}))\left(B(t^n_{i+1})-B(t^n_{i})\right)^2$$

and

$$\sum_{i=0}^{n-1} f''(B(t^n_{i}))\left(B(t^n_{i+1})-B(t^n_{i})\right)^2 \to \sum_{i=0}^{n-1} f''(B(t^n_{i}))\left(t^n_{i+1}-t^n_{i}\right)$$

Hence

$$\sum_{i=0}^{n-1} f''(B(t^n_{i}))\left(B(t^n_{i+1})-B(t^n_{i})\right)^2\to \int_0^t f''(B(s))\,ds$$

### Ito’s Process and Stochastic Differentials

Let $B(t)$ be a Brownian motion on $0\le t\le T$. An Ito process $Y(t)$, $0\le t\le T$, is given by

$$Y(t) = Y(0) + \int_0^t \mu(s)\,ds + \int_0^t \sigma(s)\,dB(s)$$

or

$$dY(t) = \mu(t)\,dt + \sigma(t)\,dB(t)$$

where $Y(0)$ is $F_0$-measurable, $\mu(t)$ and $\sigma(t)$ are $F_t$-adapted, $\int_0^T|\mu(t)|\,dt<\infty$ and $\int_0^T\sigma^2(t)\,dt<\infty$.

Example: Taylor’s formula written at second order for Brownian motion gives the chain rule $d(f(B(t)))=f'(B(t))\,dB(t) + {1\over 2}f''(B(t))\,dt$ for $f$ with continuous second derivative. For instance, $d\exp(B(t)) = \exp(B(t))\,dB(t) + {1\over 2}\exp(B(t))\,dt$ and $dB^2(t) = 2B(t)\,dB(t) + dt$.

### Ito’s Formula for Ito Processes

Let $X$ be an Ito process with stochastic differential $dX(t) = \mu(t)\,dt + \sigma(t)\,dB(t)$ for $0\le t\le T$.
Let $f\in C^2$. Then the stochastic differential of the process $f(X(t))$ exists and can be written as

$$\begin{eqnarray*} df(X(t)) &=& f'(X(t))dX(t)+\frac{1}{2}f''(X(t))d[X,X](t)\\ &=& f'(X(t))dX(t)+\frac{1}{2}f''(X(t))\sigma^{2}(t)dt\\ &=& \left(f'(X(t))\mu(t)+\frac{1}{2}f''(X(t))\sigma^{2}(t)\right)dt+f'(X(t))\sigma(t)dB(t) \end{eqnarray*}$$

### Ito’s Integral and Gaussian processes

If $X$ is a non-random process satisfying $\int_0^T X^2(s)\,ds<\infty$, then $Y(t)=\int_0^t X(s)\,dB(s)$ is a Gaussian process with zero mean and covariance

$$\mbox{Cov}(Y(t), Y(t+u)) = \int_0^t X^2(s)\,ds,\qquad t,u\ge 0$$

# The Theory of Speculation [11/12]

The focus of Bachelier’s work, The Theory of Speculation, was to establish the probability law for the price fluctuations that the market admits at a given instant. Indeed, while the market does not foresee fluctuations, it considers which of them are more or less probable, and this probability can be evaluated mathematically. There are innumerable influences on the movements of the stock exchange, and the exchange acts upon itself: its current movement is a function not only of earlier fluctuations but also of the present market position. The determination of these fluctuations is subject to an infinite number of factors; it is therefore impossible to expect a mathematically exact forecast. (An excellent translation of the original paper: http://www.radio.goldseek.com/bachelier-thesis-theory-of-speculation-en.pdf)

A few important terms:

Spot price: the current price at which a particular security can be bought or sold at a specified time and place. (A security’s futures price is the expected value of the security.)

Futures contract: a contractual agreement to buy or sell a particular commodity or financial instrument at a pre-determined price in the future.
Option: offers the buyer the right, but not the obligation, to buy (call) or sell (put) a security or other financial asset at an agreed-upon price (the strike price) on a specific date (the exercise date).

Mathematical advantage: mathematical expectation does not provide a coefficient representing, in some sense, the intrinsic value of the game; it only tells us logically whether a game will be profitable or not. Hence the concept of mathematical advantage. We can understand it through the example of a gambler’s odds, as explained in Bachelier’s paper: define the mathematical advantage of a gambler as the ratio of his positive expectation to the arithmetic sum of his positive and negative expectations, i.e. (mathematical expectation of gain) / (mathematical expectation of gain + absolute value of mathematical expectation of loss). Mathematical advantage varies, like probability, from zero to one, and it equals 1/2 when the game is fair. The mathematical expectation of the speculator is zero.

The general form of the probability curve: the probability that a price $y$ be quoted at a given epoch is a function of $y$. The price considered by the market as the most likely is the current true price. Prices can vary between $-x_0$ and $+\infty$, $x_0$ being the current absolute price; it will be assumed that they can vary between $-\infty$ and $+\infty$, the probability of a spread greater than $x_0$ being considered a priori entirely negligible. The probability of a deviation from the true price is independent of the absolute level of this price, and the probability curve is symmetrical with respect to the true price. Only relative prices will matter, because the origin of the coordinates will always correspond to the current true price.

Probability law of the financial product, Method 1:

- $p_{x,t}\,dx$: the probability that, at epoch $t$, the price is found in the elementary interval $[x, x+dx)$.
- The probability that the price is in $[x,x+dx)$ at epoch $t_1$ and in $[z,z+dz)$ at epoch $t_1+t_2$ is therefore $p_{x,t_1}\,p_{z-x,t_2}\,dx\,dz$.
- Markov property: $p_{z,t_1+t_2}\,dz = \left(\int_{-\infty}^{+\infty} p_{x,t_1}\,p_{z-x,t_2}\,dx\right)dz$, because at epoch $t_1$ the price could be located in any of the intervals $dx$ between $-\infty$ and $+\infty$; this is the probability of the price $z$ being quoted at epoch $t_1+t_2$.
- Homogeneous Gaussian process: $$p_{x,t} = {1\over 2\pi k\sqrt t}\exp\left(-{x^2\over 4\pi k^2 t}\right)$$ where $k=\int_0^\infty x\,p_{x,1}\,dx$ is the expected gain at time 1.

Method 2: an alternate derivation of the probability law.

- Conduct $m$ Bernoulli trials in $[0,t)$ with success rate $p$. Suppose that two complementary events A and B have the respective probabilities $p$ and $q=1-p$. The probability that, on $m$ occasions, A occurs $\alpha$ times and B occurs $m-\alpha$ times is given by $${P}(\alpha; m,p) = {m\choose \alpha}p^\alpha q^{m-\alpha}$$
- The quantity $h$ is called the spread. The probability of a spread $h$ is the term in the expansion of $(p+q)^m$ in which the exponent of $p$ is $mp+h$ and that of $q$ is $mq-h$. Let us seek the mathematical expectation of a gambler who receives a sum equal to the spread whenever the spread is positive. To obtain the expectation corresponding to each term, multiply its probability by $h$; now $h = q(mp+h)-p(mq-h)$, $mp+h$ and $mq-h$ being the exponents of $p$ and $q$ in a term of $(p+q)^m$. This gives $$\sum_{h\ge0}h\,\mbox{P}(mp+h;m,p) = pq\cdot\left(\partial_{p}\sum_{h\ge0}\mbox{P}(mp+h;m,p)-\partial_{q}\sum_{h\ge0}\mbox{P}(mp+h;m,p)\right) = \frac{m!}{(mp)!(mq)!}p^{mp}q^{mq}\,mpq$$
- Apply Stirling’s formula $n!=e^{-n}n^n\sqrt{2\pi n}$ to obtain $${E}={dh\over \sqrt{2\pi m p q}}\exp\left(-{h^2\over 2 m p q}\right)$$

# Cities and Complexity [10/27]

Prepared by Avinav Sharan, Himanshu Sharma and Gaurav Kshirsagar

### Rank-Size Population Distributions And Dynamics of Cities

We analyse how the population of a given city shifts with time, given the cities’ ranks.
We rank the cities by their population size, and define some parameters to represent the populations and their ranks:

- $P_{1}(t),\dots,P_{N}(t)$: populations of $N$ cities at time $t$
- $r_{1},\dots,r_{N}$: population ranks of the cities, in decreasing order
- $P_{i,r_{i}=1}\ge P_{j,r_{j}=2}\ge\dots\ge P_{m,r_{m}=N}$: populations sorted in decreasing order
- $\Gamma(t+1)=\sum_{i}|p_{i,r_{i}(t+1)}(t+1) - p_{i,r_{i}(t)}(t)|$: percentage shift in population, where $p_{i,r}=P_{i,r}/\sum_{i}P_{i,r}$
- $\Lambda(t+1)=\sum_{i}|r_{i}(t+1)-r_{i}(t)|/N$: percentage shift in rank order

insert image 1

insert image 2

We plotted $\log(\text{rank})$ versus $\log(\text{population})$ for 458 standardised administrative areas from 1901 to 1991. The plot shows a power-law relation between rank and population, and the curve is very stable: between 1901 and 1991 it keeps the same shape, with the same coefficient, even as the populations of the cities change.

How can we explain this pattern from microscopic agent behaviour and interaction? If we can, then we can define a model of population development.

insert image 3

We will try to explain one of the multiple interpretations. Let the increment in log-population be a random variable with a Gaussian distribution. If we have the initial state (population at time $t=0$, $P_{i}(0)$) and the increments $\epsilon_{i}(0),\epsilon_{i}(1),\dots$ for each city $i$, then

$$\log P_{i}(t) = \log P_{i}(0) + \sum_{\tau=0}^t \epsilon_{i}(\tau)$$

Assuming the increments have mean $\mu$ and variance $\sigma^2$, we get a log-normal distribution (meaning the logarithm of the population is normally distributed). Now we simulate the same number of cities, 458, starting from a uniform population distribution, and in each city we simulate the population growth according to a Gaussian distribution.
We sample the growth factors of all 458 cities and simulate from 991 AD to 1991, and what we find is indeed a nice straight line of log population share versus log rank. This is one feasible interpretation, a hypothesis about city development.

### Explaining city growth with the Ants Model

How can we explain the above model of growth in city populations?

insert image 4

We use an agent-based model to explain the growth of cities. In the picture we have a nest in the centre and three food sources. Initially, the ants move around randomly until they reach a food source. They then carry food back, leaving chemical trails on the return trip for the other ants, so that the food source can be found easily. We can watch a simulation of this model in the NetLogo tool.

We can identify several things with this model. First, the nearest food source will probabilistically be explored first and the furthest last. Second, whenever ants find food they leave trails, which build up a large concentration of the chemical. In this way we can relate the ant model to the behaviour of people.

### Formulation of the Ants Model

The following random variables represent the ants model:

- Position of ant $i$ at time $t$: $(x_i(t),y_i(t))$
- Home of ant $i$: $(X_i,Y_i)$
- Position of resource $k$: $(X_k, Y_k)$
- Chemical scent concentration at point $(x,y)$: $\psi(x,y)$

A few things to note in this representation: the home of an ant does not vary with time, while the chemical scent concentration is a function of time.

### Process

When an ant is moving back home with food, it leaves a trail of chemical scent, represented by the following equation:

\begin{equation} \psi(x_i(t+1),y_i(t+1)) = \psi(x_i(t),y_i(t)) + \phi \end{equation}

That is, the ant adds an amount $\phi$ of scent to the concentration at its current position at each time step.
Generally, we can represent the ants’ motion as:

\begin{equation} x_i(t+1) = x_i(t) + \Delta x_i(t) + \epsilon_x(t) \end{equation}

\begin{equation} y_i(t+1) = y_i(t) + \Delta y_i(t) + \epsilon_y(t) \end{equation}

That is, the position of an ant follows a gradient step from the previous time step plus some noise $\epsilon$. The gradient $(\Delta x,\Delta y)$ depends on two cases.

When the ant is seeking food, it tries directions at random (the $\epsilon(t)$ term) but follows any chemical trail it finds. This is represented by the equation:

\begin{equation} (\Delta x_i(t), \Delta y_i(t)) = c \cdot \nabla \psi(x_i(t),y_i(t)) + \epsilon(t) \end{equation}

When the ant is carrying food back, it follows a path directed toward its home:

\begin{equation} (\Delta x_i(t), \Delta y_i(t)) = (f(X_i),f(Y_i)) \end{equation}

This formulation lets us think about the ant model probabilistically.

### Simulation and Observation of the Ants Model

First we simulate a **simplified ants model, with one home (at the origin) and one food centre** (represented by a square box). In the image below we can observe three time stages. Initially (at $t=0$), the ants move in random directions; there are no definite permanent tracks yet. By $t=200$, the ants have found the food and a track has started to emerge, but the ants are still randomly distributed in the space. Finally, at $t=2000$, the track is well defined and the ants have started to cluster around it.

insert image 5

Next we simulate an **ants model with random homes and multiple resources**. In the image below we can clearly observe one of the main properties of cities: roads. After a sufficient number of iterations, star-like structures (roads) appear between homes and resources.
insert image 6

Next, let us make this formulation more interesting and complex. Instead of resources we add ‘Industry’ ($I_{xy}$) and ‘Service’ ($S_{xy}$), and we also dynamically add population ($P_{xy}$) to the model. Service depends on Industry, and the population depends on Service and Industry; we can model any kind of interaction between Industry, Service and Population. The process of the model is defined as follows: the population at any time depends on the population at the previous time step, the diffusion of population ($\nabla^2P_{xy}(t)$, the Laplacian) at the previous time step, the interactions with industry and service ($o_{xy}^{(k)}(t)$), and random noise ($\epsilon$):

\begin{equation} P_{xy}(t+1) = P_{xy}(t) + \theta^{(P)}\nabla^2P_{xy}(t) + \Sigma_k o_{xy}^{(k)}(t) + \epsilon_{xy}^{(P)}(t) \end{equation}

where $\theta$ is a diffusion constant. Similarly, we can represent the density of industrial resources and services:

\begin{equation} I_{xy}(t+1) = I_{xy}(t) + \theta^{(I)}\nabla^2 I_{xy}(t) + \Sigma_k o_{xy}^{(k)}(t) + \epsilon_{xy}^{(I)}(t) \end{equation}

\begin{equation} S_{xy}(t+1) = S_{xy}(t) + \theta^{(S)}\nabla^2 S_{xy}(t) + \Sigma_k o_{xy}^{(k)}(t) + \epsilon_{xy}^{(S)}(t) \end{equation}

### Observations

In the image below we can observe the simulation. Initially (at $t=100$) we see the formation of nuclear cities: clusters of population that represent cities. The log-size of the ‘cities’ is linear in log-rank, as we saw in the British cities data from 1901 to 1991. We also observe the formation of roads.

insert image 7

### Sandpile Model

Let us perform the following experiment: drop sand uniformly onto a table. Gradually the sandpile height increases, and sand starts to fall into the surrounding area. The key observation is that a sand fall can trigger more sand falls, so an ‘avalanche’ can occur. The sandpile model captures this experiment.
Model Representation:
\begin{equation} X_{i,j} \geq 4: \quad X_{i,j} = X_{i,j} - 4, \end{equation}
\begin{equation} X_{i \pm 1,j} = X_{i \pm 1,j} + 1, \end{equation}
\begin{equation} X_{i,j \pm 1} = X_{i,j \pm 1} + 1, \end{equation}
where $X_{i,j}$ is the height of the sandpile.

NetLogo Demo: check out http://netlogoweb.org/launch#http://netlogoweb.org/assets/modelslib/Sample%20Models/Chemistry%20&%20Physics/Sandpile.nlogo

You can play with it and observe that small avalanches occur more often than large ones. We study the sandpile model because it is related to many real-life scenarios. For instance, the migration of population from one city to another can be said to follow the sandpile model. We can likewise observe it in forest fires, in the frequency and size of earthquakes, and so on.

Size of Avalanche Based on the SandPile Model

Here we use the sandpile model to understand the workings of avalanches. Attempts to model avalanche behaviour date from the early 20th century, notably the work of Professor Lagotala in preparation for the 1924 Winter Olympics in Chamonix. His method was developed by A. Voellmy and popularised following the publication in 1955 of his Ueber die Zerstoerungskraft von Lawinen (On the Destructive Force of Avalanches).
Voellmy used a simple empirical formula, treating an avalanche as a sliding block of snow moving with a drag force proportional to the square of its flow speed:
\begin{equation} F = \tfrac{1}{2} \rho \nu^2 \end{equation}

An avalanche grows in the following manner, each state depending on the previous one:
\begin{equation} A + \phi \rightarrow B \end{equation}
\begin{equation} B + \phi \rightarrow C \end{equation}
\begin{equation} C + \phi \rightarrow D \end{equation}
\begin{equation} D + \phi \rightarrow p_1: D + \phi, \quad p_2: C + 2\phi, \quad p_3: B + 3\phi, \quad p_4: A + 4\phi \end{equation}

The kinetic equations for this growth are:
\begin{equation} \dot{n}_A = n_\phi (p_4 n_D - n_A) \end{equation}
\begin{equation} \dot{n}_B = n_\phi (p_3 n_D + n_A - n_B) \end{equation}
\begin{equation} \dot{n}_C = n_\phi (p_2 n_D + n_B - n_C) \end{equation}
\begin{equation} \dot{n}_D = n_\phi (p_1 n_D + n_C - n_D) \end{equation}
In general,
\begin{equation} \dot{n}_\phi = n_\phi (\bar{p} n_D - 1) + \bar{p}\nu\nabla^2 (n_\phi n_D) + \eta(r,t) \end{equation}
where $\bar{p} = p_1 + 2p_2 + 3p_3 + 4p_4$, $\nu$ is the diffusion coefficient, and $\eta$ is noise.

Insert image 8

Avalanches in the 2D sandpile model. The dark regions show the size of avalanches triggered by a single addition. Though at first glance it seems that the number of topplings and the area both measure the number of affected sites, a single site may topple more than once in a single avalanche, and hence the two are truly different variables. The duration of the avalanche is defined as the number of updates one must perform on the system before all sites become stable after the addition of one grain of sand.
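In the well-mixed limit (dropping the diffusion and noise terms), the kinetic equations above can be integrated with a simple forward-Euler scheme. The branching rates $p_i$, the initial densities, and the step size below are illustrative assumptions:

```python
# Forward-Euler integration of the kinetic equations in the well-mixed
# limit (diffusion and noise terms dropped). The branching rates p_i and
# the initial densities are illustrative assumptions.
p1, p2, p3, p4 = 0.25, 0.25, 0.25, 0.25
pbar = p1 + 2 * p2 + 3 * p3 + 4 * p4   # mean number of phi particles released

nA, nB, nC, nD, nphi = 0.2, 0.2, 0.2, 0.2, 0.1
dt = 0.01
for _ in range(100_000):
    dA = nphi * (p4 * nD - nA)
    dB = nphi * (p3 * nD + nA - nB)
    dC = nphi * (p2 * nD + nB - nC)
    dD = nphi * (p1 * nD + nC - nD)
    dphi = nphi * (pbar * nD - 1.0)
    nA += dA * dt
    nB += dB * dt
    nC += dC * dt
    nD += dD * dt
    nphi += dphi * dt

# With pbar * nD < 1 the activity n_phi dies out (subcritical pile);
# pbar * nD = 1 marks the critical point.
print(nA, nB, nC, nD, nphi)
```

With these rates $\bar{p}\,n_D < 1$, so the active density $n_\phi$ decays to zero and the other densities freeze, which matches the subcritical reading of the $\dot{n}_\phi$ equation.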
Video: an avalanche which follows the sandpile model.

Avalanche Equilibrium

For the avalanche to reach its equilibrium, we consider the probability of the number of grains affected after t steps. The probability generating function is:
\begin{equation} G_t(\lambda_A,\lambda_B,\lambda_C,\lambda_D) = \sum_{N} P(N_A,N_B,N_C,N_D)\, \lambda^{N_A}_A \lambda^{N_B}_B \lambda^{N_C}_C \lambda^{N_D}_D \end{equation}
At time 1,
\begin{equation} G_1 = (n_Dp_4)\lambda_A + (n_Dp_3 + n_A)\lambda_B + (n_Dp_2 + n_B)\lambda_C + (n_Dp_1 + n_C)\lambda_D \end{equation}
At time t+1, the recursive relationship with $G_t$ is

\begin{equation}
G_{t+1} = n_A\lambda_B + n_B\lambda_C + n_C\lambda_D + n_D\left(p_1\lambda_D G_t + p_2\lambda_C G_t^2 + p_3\lambda_B G_t^3 + p_4\lambda_A G_t^4\right)
\end{equation}

\begin{equation}
G_\infty(\lambda) = \frac{1-\sqrt{1-4\lambda^2 n_D(1-n_D)}}{2\lambda n_D}
\end{equation}

Example: Metropolitan Buffalo population growth (1750-1990)

Insert image 9

It has been discussed and found that, as the system diverges, the various parameters of the sandpile model exhibit a power-law distribution in scenarios like avalanches or population. For additional information see http://guava.physics.uiuc.edu/~nigel/courses/563/Essays_2012/PDF/banerjee.pdf

For such a distribution in the case of population, we have the following parameters:
\begin{equation}
N(R) = (2R)^D,
\end{equation}
where $D$ = fractal dimension of the city, $1 \leq D \leq 2$,
and $R$ = effective radius of the city.

Cumulative population density:
\begin{equation}
P(R) = N(R)/A(R) = (2R)^{D-2}
\end{equation}
where $A(R) = (2R)^2$ is the area of the bounding square.

Incremental population
\begin{equation}
n(R) = \frac{dN(R)}{dR} = 2D(2R)^{D-1}
\end{equation}

Incremental density
\begin{equation}
\rho(R) = \frac{dn(R)}{dR} = 4D(D-1)(2R)^{D-2}
\end{equation}
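A quick numeric sanity check of these scaling relations, using an illustrative fractal dimension $D = 1.5$ and differentiating $N(R) = (2R)^D$ numerically (note the chain-rule factor of 2 from the $2R$ argument):

```python
# Numeric check of the population scaling relations for a fractal city.
D = 1.5          # fractal dimension (illustrative, 1 <= D <= 2)
R = 10.0         # effective radius (illustrative)
h = 1e-6         # step for the central difference

N = lambda r: (2 * r) ** D          # cumulative population
A = lambda r: (2 * r) ** 2          # area of the bounding square
P = N(R) / A(R)                     # cumulative density, equals (2R)^(D-2)

# n(R) = dN/dR via central difference; analytically 2*D*(2R)^(D-1).
n_numeric = (N(R + h) - N(R - h)) / (2 * h)
n_analytic = 2 * D * (2 * R) ** (D - 1)
print(P, n_numeric, n_analytic)
```

The numeric derivative agrees with the closed form, and $P(R)$ reduces to $(2R)^{D-2}$ as stated.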

Prepared by Hao Fan, Manu Kothegal Shivakumar and Pritika Mehta

In this lecture, we will talk specifically about how we can combine agent-based modeling on the one hand and vehicle tracking data on the other to make inferences about the states of the roads dynamically.

The resources for understanding our cities are not only published papers and books; more important are the models, computational tools, and visualization tools. Listed below is open-source simulation software.

• MATSim
• TRANSIMS — Transportation Analysis Simulation System
• DynaMIT & MITSIMLab — DynaMIT and MITSIMLab are developed by the civil engineering department at MIT, supported by the Department of Transportation.
• UrbanSim — UrbanSim is a tool to simulate the migration of people within a city and the interactions of different industries as well as people. It is pretty useful for people working with cities and with large-scale social dynamics.
• SimCity open source — SimCity is a famous computer game, and there are several versions of it. The first version of it was open-sourced, and there is also a java version.

If we work with real-world cities, there are two major problems. The first is how we can combine those tools to make real-time predictions about what will happen; the second is how we can bridge the gap between the data we can collect and the data we will really find useful, and justify that our bridge leads to convincing conclusions about our cities. In such cases, we need visualization tools to help display our models.

• OpenLayers — OpenLayers is a set of JavaScript libraries for bringing tiles of a map from a map server.
• MapServer — MapServer reads in geographical features such as roads, rivers, and the boundaries of countries or administrative areas on the server side, and then serves those tiles.
• Google Analytics — Google Analytics can help generate all types of dynamic charts about many different things.
• Google Maps API

After we get simulation and visualization tools, the next step is to find useful data. Below are some sources of data that can help us understand cities.

Census Data

• Demographic & Health Surveys: The Demographic & Health Surveys contain data sets, created by people from the US, about many different countries. They include information such as the latitudes and longitudes of the surveyed villages, along with survey results such as annual income, average education, birth rates, death rates and other statistics.
• US Census Bureau: The US Census Bureau provides comprehensive US census data, including road segments, household statistics, travel patterns and so on.
• Senegal: National Agency of Statistics and Demography (ANSD) — ANSD is a data-for-development collaboration; it contains additional information about epidemics, employment and many other things.
• Open government initiative

Street Map & Yellow Pages

• Open Street Map: Open Street Map data is especially good in the US and in Europe, but it is also good in many other parts of the world.
• Yellow Pages

Call Detail Records

Data for Development of Ivory Coast and Senegal contains the flow and interaction of population in those countries. It can also be used to track the spread of epidemic diseases at the country scale, or to make predictions about the economy.

Vehicle Tracking

• Niagara Frontiers Transportation Authority Developer Tools
• MTA Bus Time: Many city governments have developed web APIs for extracting the locations of individual vehicles in different public transportation fleets; the above are the web APIs for transportation in Buffalo and New York City.
• NYCT&L Taxi Data: NYCT&L Taxi Data is taxi data containing the latitudes and longitudes of all yellow cabs in Manhattan for over three years at a temporal resolution of one sample per minute. It can also help us understand whether there is traffic congestion, an accident, or something else happening in the city.
• Cabspotting — Cabspotting is a data set that tracks taxis in San Francisco.
• mPat sample data
• Mobile Millennium

Mobile Millennium is a data set about the dynamics of the San Francisco Bay Area, including San Francisco, San Jose, Berkeley and many other cities.

In the paper “A survey of results on mobile phone datasets analysis” by Vincent D. Blondel, Adeline Decuyper and Gautier Krings, some recent advances in the study of mobile phone datasets are reviewed.

Models of the behavior captured by mobile phone data sets

• Diffusion of information and diseases
• Trajectory as Lévy flight, exploration vs exploitation, very few habitats, predictability of next location, amount of commuting between two counties, neighborhood signatures revealed by calling patterns, movement and geographic information propagation in response to emergency
• Probability of making a phone call, duration of a phone call vs distance between the two ends of the call, social/ethnic/historical partitioning revealed by phone calls, economic development vs diversity
• Duration of being in phone-contact, burstiness of inter phone call intervals
• Power law degree distribution, clustering/small world (maximizing utility), symmetry, community detection/ dynamics of small and large communities, inferring gender and age from call pattern

Applications of mobile phone data sets

• Urban sensing (O/D matrix, flow of people, classifying trips)
• Tracking diseases
• Viral marketing

In transportation modeling, we want to identify a transportation simulator as an event-driven model and assign probabilities to the different sample paths of this event-driven model. Based on this idea, if we treat the measurements as observations of the system, then we can think of the simulator as a hidden Markov process (although this hidden Markov process is pretty complicated) and make inferences about what is happening in it from the observations. To do the simulation and inference, several things are needed.

• Network file: information about roads. The road network is literally a network: the nodes represent road intersections, and the edges are either locations (such as buildings) or road segments. Road segments run from one intersection to another, and at each intersection a vehicle must decide whether to take one branch or another.
• Plans/population file: trips of all individuals between home, work, school, etc. by car, walk, bus, etc. What actually determines the dynamics are the plans of the individual people: their locations, the places they go on a typical day, the characteristics of those trips, and the times they plan them. The plans of the population determine the dynamics because from the plans and the population we can find the state transition matrix.
• Facility file: open time, close time and capacity of buildings
• Configuration file: choice to simulate traffic on a road, replan trips, score trips, generate output, etc.

The paper “A large-scale agent-based traffic microsimulation based on queue model” by Nurhan Cetin, Adrian Burri and Kai Nagel provides a specific case study of how the events are scheduled, how the events are registered against the timer, and how we update the state of the road segments as well as the states of the agents in response to individual events.

• Each road has a length, a free speed and a capacity.
• If number of cars ahead is less than capacity, a car can move to the next segment with free speed.
• Otherwise, the car waits (in a queue) until the cars ahead move out.
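The queue model described above can be sketched as a single road segment with a capacity and a free-flow travel time; the class name, numbers, and interface below are illustrative, not the paper's implementation:

```python
import collections

class Road:
    """Queue model of a road segment: cars enter, traverse at free speed,
    then leave in FIFO order once their free-flow traversal is done."""
    def __init__(self, length_m, free_speed_ms, capacity):
        self.travel_time = length_m / free_speed_ms  # free-flow traversal time (s)
        self.capacity = capacity                     # max cars on the segment
        self.queue = collections.deque()             # (car, earliest_exit_time)

    def enter(self, car, now):
        if len(self.queue) >= self.capacity:
            return False                             # segment full: car waits upstream
        self.queue.append((car, now + self.travel_time))
        return True

    def leave(self, now):
        """Pop every car whose free-flow traversal has finished by `now`."""
        out = []
        while self.queue and self.queue[0][1] <= now:
            out.append(self.queue.popleft()[0])
        return out

road = Road(length_m=500, free_speed_ms=10, capacity=3)
t = 0.0
assert road.enter("a", t) and road.enter("b", t) and road.enter("c", t)
assert not road.enter("d", t)        # over capacity: rejected
print(road.leave(t + 50))            # → ['a', 'b', 'c'] once the 500 m segment clears
```

A full microsimulation would chain such segments through intersections and only release a car when the downstream segment has spare capacity.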

Another paper, “Generating complete all-day activity plans with genetic algorithms” by David Charypar and Kai Nagel, provides a dynamic model of how different people choose different trips every day.

• Change trip starting time
• Change route
• Change mode (car, bus, walk)
• Change destination

In this model, people change routes, change travel modes and change destinations, and the road segments respond to the same demands in different ways at different times, according to weather and many other factors.

In an event-driven model we define a set of events, and the events change the state of the system in different ways. If a sample path of events with corresponding times is given, we can find its probability.

$p\circ l_1\to p\circ l_2$
This represents one event where person p goes from one location $l_1$ to another location $l_2$.
The state of the system is defined in terms of the location of a person at time t and the number of vehicles on road segment A at time t.
$X^{(p)}_t$ is the location of person $p$ at time $t$
$X^{(l)}_t$ is the number of vehicles on road $l$ at time $t$

If road segment $A$ has a lot of vehicles, then a vehicle will stay on segment $A$ for a longer time, and a driver might decide to turn off onto one branch instead of another. So the state transition matrix is actually a function of the state of the system. In any case, we can describe the system in terms of single events, get the states from the events, and from the states proceed to the probability measure over different sample paths.
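The event type and its effect on the state can be sketched as a simple update on per-segment counts and per-person locations, with a congestion-dependent transition probability; the functional form and all names below are hypothetical illustrations:

```python
# Event-driven update: person p moves from segment l1 to segment l2.
# The state is the per-segment vehicle count plus each person's location.
def apply_event(counts, locations, p, l1, l2):
    assert locations[p] == l1, "event inconsistent with current state"
    counts[l1] -= 1
    counts[l2] += 1
    locations[p] = l2

def transition_prob(counts, l1, l2, capacity):
    """Congestion-dependent transition: the fuller l2 is, the less likely
    a driver is to branch into it (illustrative functional form)."""
    return max(0.0, 1.0 - counts[l2] / capacity[l2])

counts = {"A": 2, "B": 0}
locations = {"p1": "A", "p2": "A"}
capacity = {"A": 4, "B": 4}

print(transition_prob(counts, "A", "B", capacity))  # empty segment B: prob 1.0
apply_event(counts, locations, "p1", "A", "B")
print(counts, locations)
```

This captures the key point of the paragraph: the transition probabilities themselves depend on the current state, so the "transition matrix" changes as events accumulate.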

One way we can calculate probabilities is by using mean field approximation, a projection-based method. Based on the model we defined, we can find the probability of making a transition from location $l_1$ to another location $l_2$, conditioned on our observations.

The posterior probability of event $pl_1\to pl_2$ and latent states $x_{t-1,t}^{(l)}$ , $x_{t-1,t}^{(p)}$ conditioned on observations is

\begin{align*}
& P(pl_{1}\to pl_{2},\{x_{t-1,t}^{(l)}:l\},\{x_{t-1,t}^{(p)}:p\}|\mbox{obs})\\
& =\prod_{l}\alpha_{t-1}^{(l)}(x_{t-1}^{(l)})\cdot\prod_{p'}\alpha_{t-1}^{(p')}(x_{t-1}^{(p')})\\
& \cdot p_{l_{1},l_{2}}\cdot\delta(x_{t-1}^{(p)}=l_{1})\\
& \cdot\beta_{t}^{(l_{1})}(x_{t}^{(l_{1})})\delta(x_{t}^{(l_{1})}=x_{t-1}^{(l_{1})}-1)P(\mbox{obs}_{t}^{(l_{1})}|x_{t}^{(l_{1})})\\
& \cdot\beta_{t}^{(l_{2})}(x_{t}^{(l_{2})})\delta(x_{t}^{(l_{2})}=x_{t-1}^{(l_{2})}+1)P(\mbox{obs}_{t}^{(l_{2})}|x_{t}^{(l_{2})})\\
& \cdot\prod_{l\ne l_{1},l_{2}}\beta_{t}^{(l)}(x_{t}^{(l)})\delta(x_{t}^{(l)}=x_{t-1}^{(l)})P(\mbox{obs}_{t}^{(l)}|x_{t}^{(l)})\\
& \cdot\beta_{t}^{(p)}(x_{t}^{(p)})\delta(x_{t}^{(p)}=l_{2})P(\mbox{obs}_{t}^{(p)}|x_{t}^{(p)})\prod_{p'\ne p}\beta_{t}^{(p')}(x_{t}^{(p')})\delta(x_{t}^{(p')}=x_{t-1}^{(p')})P(\mbox{obs}_{t}^{(p')}|x_{t}^{(p')}).
\end{align*}

We can also estimate the probability that no event occurs in a small interval of time $\epsilon$.
The posterior probability of no event and latent states $x_{t-1,t}^{(l)}$ , $x_{t-1,t}^{(p)}$ conditioned on observations is
\begin{align*}
& P(\emptyset,\{x_{t-1,t}^{(l)}:l\},\{x_{t-1,t}^{(p)}:p\}|\mbox{obs})\\
& =\prod_{l}\alpha_{t-1}^{(l)}(x_{t-1}^{(l)})\cdot\prod_{p'}\alpha_{t-1}^{(p')}(x_{t-1}^{(p')})\\
& \cdot\left(1-\sum_{p':\mbox{person}}\sum_{\mbox{event }p'l_{1}\to p'l_{2}}p_{l_{1},l_{2}}\cdot\delta(x_{t-1}^{(p')}=l_{1})\right)\\
& \cdot\prod_{l}\beta_{t}^{(l)}(x_{t}^{(l)})\delta(x_{t}^{(l)}=x_{t-1}^{(l)})P(\mbox{obs}_{t}^{(l)}|x_{t}^{(l)})\cdot\prod_{p'}\beta_{t}^{(p')}(x_{t}^{(p')})\delta(x_{t}^{(p')}=x_{t-1}^{(p')})P(\mbox{obs}_{t}^{(p')}|x_{t}^{(p')}).
\end{align*}

The state space is combinatorial, and it is big. For example, to model San Francisco we normally have about sixty thousand road segments, and a realistic simulation of the city needs about three hundred thousand vehicles, approximately 1% of the vehicles running in San Francisco over a whole day. Since the state space is so big, we have to use either Markov chain Monte Carlo or a variational method.

There are two ways to make inferences with a variational method. One is mean field approximation, meaning that the vehicles move around and the road segments change state according to mean-field effects: we project the states of all other individuals and road segments onto just one individual or one road segment, and the problem becomes more tractable.