How to make a prediction with a TensorFlow Lite model?

I would like to make a prediction with my TensorFlow Lite model. I have already trained my model and saved it in the .tflite format. Now I would like to make a prediction with the trained model. How can I do that? I've tried something, but it shows an error message:

hand = model_hands.predict(X)[0] – ‘str’ object has no attribute ‘predict’

model_hands = 'converted_model.tflite'
with open(model_hands, 'rb') as fid:
    tflite_model = fid.read()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
right = result.rightHand.hand
row = list(np.array([[res.x, res.y, res.z] for res in right]).flatten())
X = pd.DataFrame([row])
hand = model_hands.predict(X)[0]
e_result = np.argmax(hand)
prob = str(round(hand[np.argmax(hand)], 2))

Answer

The problem is in the line hand = model_hands.predict(X)[0]. You are trying to call the method predict on a string, which you defined above as model_hands = 'converted_model.tflite'. A plain string has no predict method; only a loaded model (or, for TFLite, an interpreter) can run inference.

I believe what you want to do is load the model with an Interpreter, set the input tensor, invoke the interpreter, and read the output tensor. Take a look at the following guide for more information: https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path='converted_model.tflite')
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Set up your input data. TFLite models usually expect float32 input
# shaped like input_details[0]['shape'], so cast and add a batch dimension.
right = result.rightHand.hand
input_data = np.array([[res.x, res.y, res.z] for res in right], dtype=np.float32).flatten()
input_data = np.expand_dims(input_data, axis=0)

# Invoke the model on the input data
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# Get the result
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)

# Post-process: drop the batch dimension, then take the most likely class
# and its probability (the original code referenced `hand`, which no longer
# comes from predict()).
hand = output_data[0]
e_result = np.argmax(hand)
prob = str(round(hand[e_result], 2))

Note that you may have to adapt this snippet (in particular the input shape and dtype, which depend on your model); I have not tested it against your model. The gist is that with TFLite you call set_tensor, invoke, and get_tensor on the interpreter instead of predict on a model object.
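To illustrate the post-processing step independently of TensorFlow, here is a minimal NumPy-only sketch. The output values are invented for illustration; a real run would get output_data from interpreter.get_tensor as shown above.

```python
import numpy as np

# Mock output tensor as get_tensor might return it: shape (1, num_classes).
# These class probabilities are made up for this example.
output_data = np.array([[0.1, 0.7, 0.2]], dtype=np.float32)

hand = output_data[0]          # drop the batch dimension -> (num_classes,)
e_result = np.argmax(hand)     # index of the most likely class
prob = str(round(float(hand[e_result]), 2))  # its probability as a string

print(e_result, prob)  # → 1 0.7
```

The same three lines work unchanged on the real output_data, as long as the model emits one probability vector per batch element.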