I am working on a multi-output model and need to check the sample-wise loss of each output branch before calculating the final training loss. How can I achieve this? Right now I am using the `model.fit()` method, which only gives the batch-wise total loss and the individual loss for each branch.

Below is the custom `train_step` I am using:

```python
import tensorflow as tf
from tensorflow import keras

class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
```

## Answer

To obtain the non-aggregated losses, set the `reduction` argument of your loss function to `tf.keras.losses.Reduction.NONE` (see `tf.keras.losses.Reduction` for the available options). For example, if your loss function is `BinaryCrossentropy`, you can instantiate it as follows:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
```
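As a quick check (using the `bce` defined above; the tensors are made-up illustration data), a batch of three samples now produces three loss values instead of one scalar:

```python
y_true = tf.constant([[0.0], [1.0], [1.0]])  # hypothetical targets
y_pred = tf.constant([[0.1], [0.8], [0.4]])  # hypothetical predictions
print(bce(y_true, y_pred))  # tf.Tensor of shape (3,): one loss per sample
```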

Calling `bce` will then yield the element-wise (per-sample) losses that you are after.
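If you want to inspect these per-sample values during training itself, the same idea can be folded into your custom `train_step`. Below is a minimal sketch for a model with two output branches, assuming the targets arrive as a tuple `(y_a, y_b)` and that `BinaryCrossentropy` fits both branches; the branch names, the `tf.print` logging, and the plain sum of the two branch means are placeholder choices, not the only way to aggregate:

```python
import tensorflow as tf
from tensorflow import keras

# One unreduced loss per branch: each call returns a value per sample.
loss_a = keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
loss_b = keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE)

class PerSampleLossModel(keras.Model):
    def train_step(self, data):
        x, (y_a, y_b) = data  # assumes one target tensor per output branch

        with tf.GradientTape() as tape:
            pred_a, pred_b = self(x, training=True)  # forward pass, two outputs
            per_sample_a = loss_a(y_a, pred_a)  # shape (batch_size,)
            per_sample_b = loss_b(y_b, pred_b)  # shape (batch_size,)

            # Inspect the sample-wise losses of each branch here,
            # before they are aggregated into the training loss.
            tf.print("branch A per-sample:", per_sample_a)
            tf.print("branch B per-sample:", per_sample_b)

            # Aggregate manually to a scalar for the gradient computation.
            loss = tf.reduce_mean(per_sample_a) + tf.reduce_mean(per_sample_b)

        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        return {"loss": loss}
```

Because the reduction now happens explicitly inside `train_step`, you also control how the two branch losses are weighted when they are combined into the final training loss.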