Are the following Python code snippets equivalent?

The following code is from an implementation of a neural network in Python:

    import numpy as np

    def update_mini_batch(self, mini_batch):

        # One zero-filled array per layer, matching the shapes of the biases and weights.
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]

        # Add each sample's gradient into the per-layer running totals.
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]

The first two lines initialize two lists of zero-filled arrays, one array per layer. The for loop then updates them by adding, element by element, the corresponding gradients returned by the backprop function (on the first pass, the entries being added to are all zero).
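For example, with small made-up arrays (not the real network's shapes), one accumulation step looks like this:

    import numpy as np

    nabla_b = [np.zeros(2), np.zeros(3)]  # zero-initialized, one array per layer
    delta_nabla_b = [np.array([1., 2.]), np.array([3., 4., 5.])]  # stand-in for backprop output
    nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
    # nabla_b is now [array([1., 2.]), array([3., 4., 5.])]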

To me, this should be equivalent to:

    for x, y in mini_batch:
        nabla_b, nabla_w = self.backprop(x, y)

But I can’t really be sure: both versions run successfully, and since the training depends on randomness I can’t simply compare their outputs.

Thank you.

Reference: https://github.com/mnielsen/neural-networks-and-deep-learning

Answer

They are not equivalent. The first version accumulates the gradients: after the loop, nabla_b and nabla_w hold the sum of the per-sample gradients over the whole mini-batch. The second version rebinds nabla_b and nabla_w on every iteration, so after the loop they hold only the gradients of the last sample; the contributions of every earlier sample are discarded.
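Here is a minimal sketch of the difference, using made-up single-layer gradients in place of what backprop returns:

    import numpy as np

    # Two fake per-sample gradients for one layer.
    sample_grads = [np.array([1.0, 2.0]), np.array([10.0, 20.0])]

    # First version: start from zeros and accumulate.
    nabla = np.zeros(2)
    for g in sample_grads:
        nabla = nabla + g
    print(nabla)  # [11. 22.] -- the sum over both samples

    # Second version: the name is simply rebound each iteration.
    for g in sample_grads:
        nabla = g
    print(nabla)  # [10. 20.] -- only the last sample's gradient remains

In the full update_mini_batch in the linked repository, that sum is then scaled by eta/len(mini_batch), so the weight update uses the average gradient over the mini-batch; with the second version you would instead be taking a step based only on the last sample of each batch.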