# Average time to hit a given line on 2D random walk on a unit grid

I am trying to simulate the following problem:

Given a 2D random walk (on a lattice grid) starting from the origin, what is the average waiting time to hit the line y = 1 - x?

```
import numpy as np
from tqdm import tqdm

N = 5 * 10**3
results = []
for _ in tqdm(range(N)):
    current = [0, 0]
    step = 0
    # the walk is on the line y = 1 - x exactly when x + y == 1
    while current[0] + current[1] != 1:
        step += 1
        a = np.random.randint(0, 4)
        if a == 0:
            current[0] += 1
        elif a == 1:
            current[0] -= 1
        elif a == 2:
            current[1] += 1
        elif a == 3:
            current[1] -= 1
    results.append(step)
```

This code is slow even for `N < 10**4`, and I am not sure how to optimize it or change it so that it properly simulates the problem.

Instead of simulating a bunch of random walks sequentially, let's simulate many paths at the same time by tracking the probability of each position occurring. For instance, we start at position `0` with probability 1:

```
states = {0+0j: 1}
```

and the possible moves along with their associated probabilities would be something like this:

```
moves = {1+0j: 0.25, 0+1j: 0.25, -1+0j: 0.25, 0-1j: 0.25}
# moves = {1: 0.5, -1: 0.5}  # this would basically be equivalent
```

With this construct we can advance to the next set of states by going over every combination of state and move and updating the probabilities accordingly:

```
def simulate_one_step(current_states):
    newStates = {}
    for cur_pos, prob_of_being_here in current_states.items():
        for movement_dist, prob_of_moving_this_way in moves.items():
            newStates.setdefault(cur_pos + movement_dist, 0)
            newStates[cur_pos + movement_dist] += prob_of_being_here * prob_of_moving_this_way
    return newStates
```
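As a quick sanity check (repeating the definitions so this runs on its own), one step from the origin should spread the probability evenly over the four neighbours:

```python
moves = {1+0j: 0.25, 0+1j: 0.25, -1+0j: 0.25, 0-1j: 0.25}

def simulate_one_step(current_states):
    newStates = {}
    for cur_pos, prob_of_being_here in current_states.items():
        for movement_dist, prob_of_moving_this_way in moves.items():
            newStates.setdefault(cur_pos + movement_dist, 0)
            newStates[cur_pos + movement_dist] += prob_of_being_here * prob_of_moving_this_way
    return newStates

states = simulate_one_step({0+0j: 1})
print(states)  # each of the four neighbouring positions has probability 0.25
```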

Then we just iterate this, popping out all winning states at each step:

```
for stepIdx in range(1, 100):
    states = simulate_one_step(states)
    winning_chances = 0
    # copy the items so we can delete entries from states as we go
    for pos, prob in list(states.items()):
        # if y == 1 - x we have hit the line
        if pos.imag == 1 - pos.real:
            winning_chances += prob
            # we no longer propagate this state because the path stops here
            del states[pos]

    print(f"probability of winning after {stepIdx} moves is: {winning_chances}")
```

You can also look at `states` for an idea of the distribution of possible positions, although totalling it in terms of distance from the line simplifies the data. Anyway, the final step is to weight each step count by the probability of winning at that step and see whether the running average converges:

```
total_average_num_moves += stepIdx * winning_chances
```
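Putting the pieces together (with the accumulator initialised before the loop, which the snippets above leave implicit), a complete numeric run might look like this; the horizon of 100 steps here is an arbitrary cutoff of mine:

```python
moves = {1+0j: 0.25, 0+1j: 0.25, -1+0j: 0.25, 0-1j: 0.25}

def simulate_one_step(current_states):
    newStates = {}
    for cur_pos, prob in current_states.items():
        for move, p_move in moves.items():
            newStates[cur_pos + move] = newStates.get(cur_pos + move, 0) + prob * p_move
    return newStates

states = {0+0j: 1}
total_winning_prob = 0.0
total_average_num_moves = 0.0
for stepIdx in range(1, 101):
    states = simulate_one_step(states)
    winning_chances = 0.0
    for pos in list(states):  # copy the keys so we can delete while iterating
        if pos.imag == 1 - pos.real:
            winning_chances += states.pop(pos)
    total_winning_prob += winning_chances
    total_average_num_moves += stepIdx * winning_chances

print(total_winning_prob)       # about 0.92: most walks have hit within 100 steps
print(total_average_num_moves)  # the running average keeps creeping upward
```

Pushing the horizon higher shows the averaged value growing without bound, which previews the divergence discussed below.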

But we might be able to gather more insight by using symbolic variables! (Note that I'm simplifying this to a 1D problem, as described at the bottom.)

```
import sympy

x = sympy.Symbol("x")  # will sub in 1/2 later
moves = {
    1: x,     # assume x is the chance of moving towards the target
    -1: 1-x,  # and therefore 1-x is the chance of moving away
}
```

This, with the exact code as written above, gives us this sequence:

```
probability of winning after 1 moves is: x
probability of winning after 2 moves is: 0
probability of winning after 3 moves is: x**2*(1 - x)
probability of winning after 4 moves is: 0
probability of winning after 5 moves is: 2*x**3*(1 - x)**2
probability of winning after 6 moves is: 0
probability of winning after 7 moves is: 5*x**4*(1 - x)**3
probability of winning after 8 moves is: 0
probability of winning after 9 moves is: 14*x**5*(1 - x)**4
probability of winning after 10 moves is: 0
probability of winning after 11 moves is: 42*x**6*(1 - x)**5
probability of winning after 12 moves is: 0
probability of winning after 13 moves is: 132*x**7*(1 - x)**6
```

And if we ask the OEIS what the sequence `1,2,5,14,42,132...` means, it tells us those are the Catalan numbers, with the formula `(2n)!/(n!(n+1)!)`, so we can write a function for the non-zero terms in that series as:

```
f(n, x) = (2n)! / (n! * (n+1)!) * x^(n+1) * (1-x)^n
```

or in actual code:

```
import math

def probability_of_winning_after_2n_plus_1_steps(n, prob_of_moving_forward=0.5):
    return (math.factorial(2*n) / math.factorial(n) / math.factorial(n+1)
            * prob_of_moving_forward**(n+1) * (1-prob_of_moving_forward)**n)
```
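As a small check (this variant uses `math.comb`, which avoids the huge intermediate factorials; the helper name `p_win` is mine):

```python
import math

def p_win(n, p=0.5):
    # nth Catalan number: (2n)! / (n! (n+1)!) == comb(2n, n) // (n + 1)
    return math.comb(2 * n, n) // (n + 1) * p ** (n + 1) * (1 - p) ** n

# the first non-zero terms match the symbolic output at x = 1/2:
# x -> 0.5, x**2*(1 - x) -> 0.125, ...
print(p_win(0), p_win(1))  # 0.5 0.125

# the walk is recurrent, so these probabilities sum to 1, but the tail decays
# like n**-1.5, so convergence is slow (and the int-to-float conversion
# overflows beyond roughly n = 500, so we stop there)
partial = sum(p_win(n) for n in range(500))
print(partial)  # about 0.975
```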

which now gives us a practically instant way of calculating the probability for any number of steps, or, more usefully, lets us ask Wolfram Alpha what the average would be (it diverges).
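To see the divergence numerically, we can sum `(2n+1) * f(n, 1/2)` term by term (I'm computing each term in log space with `math.lgamma` so the factorials stay within floating-point range); the partial sums grow like the square root of the number of terms instead of settling down:

```python
import math

def expected_contribution(n, p=0.5):
    # (2n+1) * (2n)!/(n!(n+1)!) * p**(n+1) * (1-p)**n, via logs to avoid overflow
    log_t = (math.log(2 * n + 1)
             + math.lgamma(2 * n + 1) - math.lgamma(n + 1) - math.lgamma(n + 2)
             + (n + 1) * math.log(p) + n * math.log(1 - p))
    return math.exp(log_t)

for limit in (1_000, 4_000, 16_000):
    print(limit, round(sum(expected_contribution(n) for n in range(limit)), 1))
# each 4x increase in the horizon roughly doubles the partial sum: sqrt growth,
# so the expected number of steps is infinite
```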

Note that we can simplify this to a 1D problem by considering `x+y` as one variable: the line `y = 1 - x` is exactly the condition `x + y = 1`, so we start at `x + y = 0`, each move increases or decreases `x + y` by 1 with equal chance, and we are interested in the first time `x + y = 1`. This means we can consider the 1D case by subbing in `z = x + y`.
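A quick sanity check of this reduction, simulating `z = x + y` directly (the line `y = 1 - x` is the condition `x + y = 1`; the cap on steps is my addition, needed because the raw hitting time has no finite mean):

```python
import random

def steps_to_hit(max_steps=10_000):
    """Steps until z = x + y first reaches 1, or None if it takes too long."""
    z = 0
    for step in range(1, max_steps + 1):
        # every 2D move changes x + y by +1 (right/up) or -1 (left/down)
        z += 1 if random.random() < 0.5 else -1
        if z == 1:
            return step
    return None

random.seed(0)
samples = [steps_to_hit() for _ in range(2_000)]
hits = [s for s in samples if s is not None]
print(len(hits) / len(samples))               # most walks hit within the cap
print(sum(s == 1 for s in hits) / len(hits))  # about half hit on the very first move
```

This is dramatically cheaper than the original 2D loop because it tracks a single integer per walk.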