How to translate an image without using cv2.warpAffine()? [Python 3 and OpenCV 4.1]

I am trying to translate an image (from a video capture) 100px left by cutting off the 100px on the left and adding 100px of black pixels on the right so that the image stays the same size. I know this can be done with cv2.warpAffine() and a translation matrix, but doing this to each frame is adding high amounts of latency. I have read that there may be a way to do this using cv2.copyTo(), but I am still not sure how exactly this can be done, whether it is using copyTo or another method. Thanks!

This is done in Python 3 with OpenCV 4.1

Current (slow) method:

import cv2
import numpy as np

# x_shift/y_shift are the translation amounts; (w, h) is the frame size
translation_matrix = np.float32([[1, 0, x_shift], [0, 1, y_shift]])
img = cv2.warpAffine(img, translation_matrix, (w, h))


Create a new image that is the same size as the old one, but index into the old image so that you start at column 100. Then assign those pixels to the new image starting at its first column and stopping 100 columns before its last, leaving the remaining columns untouched.

import cv2
import numpy as np

# Define your image somewhere...
# ...
# ...

img2 = np.zeros_like(img)
img2[:,:-100] = img[:,100:]

The above code creates a new image called img2, initialised to all zeros and the same size as the original input image img. We then copy the pixels from column 100 of the original image through to its last column, and assign them to the columns of the target image from the first column up to 100 columns before the end. The remaining 100 columns on the right stay zero, which effectively gives you black pixels.
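If you are applying this to every video frame, you could wrap the slicing trick in a small helper. A minimal sketch (the function name shift_left is my own, and it also handles a negative shift, i.e. moving the image right):

```python
import numpy as np

def shift_left(img, shift):
    """Translate img `shift` pixels to the left using NumPy slicing,
    padding the exposed edge with black. A negative shift moves right.
    Assumes abs(shift) is less than the image width."""
    out = np.zeros_like(img)
    if shift == 0:
        out[:] = img
    elif shift > 0:
        # Drop the leftmost `shift` columns; right edge stays black.
        out[:, :-shift] = img[:, shift:]
    else:
        # Drop the rightmost columns; left edge stays black.
        out[:, -shift:] = img[:, :shift]
    return out
```

Because this is a single contiguous NumPy copy rather than a full affine warp, it should be noticeably cheaper per frame than cv2.warpAffine for a pure translation.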