import imageio
import glob
import numpy as np
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# Load training images
trainImages = []
for imagePath in glob.glob('C:/Users/razva/*.png'):
    image = imageio.imread(imagePath)
    trainImages.append(image)
trainImages = np.array(trainImages)

# Load training labels
f = open('C:/Users/razva/train.txt')
trainLabels = f.readlines()
for i in range(len(trainLabels)):
    trainLabels[i] = int(trainLabels[i])
trainLabels = np.array(trainLabels)

# Load validation images
validationImages = []
for imagePath in glob.glob('C:/Users/razva/*.png'):
    image = imageio.imread(imagePath)
    validationImages.append(image)
validationImages = np.array(validationImages)

# Load validation labels
f = open('C:/Users/razva/validation.txt')
validationLabels = f.readlines()
for i in range(len(validationLabels)):
    validationLabels[i] = int(validationLabels[i])
validationLabels = np.array(validationLabels)

# Standardize both sets
mean_image = np.mean(trainImages, axis=0)
sd = np.std(trainImages)
trainImages = (trainImages - mean_image) / sd

mean_image1 = np.mean(validationImages, axis=0)
sd1 = np.std(validationImages)
validationImages = (validationImages - mean_image1) / sd1

# Build the CNN
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(trainImages, trainLabels, epochs=10,
                    validation_data=(validationImages, validationLabels))
I have this CNN for image classification. trainImages and trainLabels (labels from 0 to 8) are the training data; validationImages and validationLabels are used for testing. The images are 32 x 32. I can't make this algorithm work; please tell me if you observe any other errors.
I can't tell exactly where the problem is since I have no access to the loaded images, but one issue is that you are providing samples without the "channel" axis, which the specified input_shape=(32, 32, 3) says has size 3. Each sample (image) must have 3 dimensions (height, width, channels), but you are instead passing samples with just 2 dimensions (height and width).
This is most likely because you are loading gray-scale images, which have a single channel that NumPy does not give an explicit axis. If this is the case, make sure that each sample in both trainImages and validationImages has shape (32, 32, 1); otherwise, expand the last dimension with np.expand_dims(trainImages, axis=-1) (and the same for the validation set) before feeding them to the model. Accordingly, change the input_shape of the first Conv2D layer to (32, 32, 1).
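As a quick illustration, here is a minimal sketch of that fix, using a dummy all-zeros batch in place of your real data (the array name and batch size are placeholders):

```python
import numpy as np

# Hypothetical batch of 5 gray-scale 32x32 images loaded by imageio:
# NumPy stacks them with shape (5, 32, 32), i.e. no channel axis.
images = np.zeros((5, 32, 32))

# Append the missing channel axis at the end -> shape (5, 32, 32, 1),
# which matches an input_shape of (32, 32, 1) in the first Conv2D layer.
images = np.expand_dims(images, axis=-1)
print(images.shape)  # (5, 32, 32, 1)
```

You can check which case you are in by printing trainImages.shape right after the np.array call: 3 entries means the channel axis is missing, 4 means you are fine.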
Hope this helps; otherwise, share further details.