How to interpret output tensor data packing in TensorFlow Lite C++?

I am working with a TensorFlow Lite model in C++ (I already have it working in Python) and I was confused about how my output data is packed. I couldn't find any references in the documentation. I looked in the tflite source and learned that I can get the output dimensions of my tensor using `dims`, for example:

for (int i = 0; i < out_tensor->dims->size; i++) {
  printf("%d\n", out_tensor->dims->data[i]);
}

This gives me:

1
96
96
14

Which is exactly what I know the output data to be: a 96×96 grid where each grid element is 14 floats. What I don't understand is how to get this data out properly. At first we assumed it was flat and pulled it out like this:

const float* output = interpreter->typed_output_tensor<float>(0);

for (int j = 0; j < num_values; ++j) {
  output_data_flat[out_idx++] = output[j];
}

But this did not seem to come out right. What is the right, or at least a clean, way to unpack this output data?

Thank you.

Answer

TensorFlow Lite stores tensor data contiguously in row-major (C-style) order, so you can treat the output buffer as a flat array.

In other words, you can treat `interpreter->typed_output_tensor<float>(0)` as a `float[1][96][96][14]`: the element at `(0, y, x, c)` lives at flat index `(y * 96 + x) * 14 + c`.