I've been playing with the TensorFlow library, working through the tutorials. Now I want to use my own data, but I'm failing horribly. This is perhaps a noob question, but I can't figure it out.
I'm using this example: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3%20-%20Neural%20Networks/convolutional_network.py
To convert my images for use with TensorFlow, I'm using this: https://github.com/HamedMP/ImageFlow/blob/master/ImageFlow.py
Now I change the parameters in the example from this:
n_input = 784
n_classes = 10
to this:
n_input = 9216
n_classes = 2
I did that because my images are 96 * 96 pixels and there are only 2 classes.
I also changed the weights and biases to the numbers I need.
I read the data like this:
batch_xs = imgReader.read_images(pathname)
(imgReader being the ImageFlow module.)
But when I try to run it, it gives me this error:
ValueError: Cannot feed value of shape (104, 96, 96, 1) for Tensor
u'Placeholder:0', which has shape (Dimension(None), Dimension(9216))
I feel like I'm overlooking something small, but I don't see it.
Answers
The error you're encountering, `ValueError: Cannot feed value of shape (104, 96, 96, 1) for Tensor u'Placeholder:0', which has shape (Dimension(None), Dimension(9216))`, indicates a mismatch between the shape of your input data and the shape the TensorFlow placeholder expects.
Here's a step-by-step guide to resolve this issue:
Understanding the Shapes
- Input data shape: your input data, batch_xs, has shape (104, 96, 96, 1), where 104 is the batch size, 96 and 96 are the width and height of the images, and 1 is the number of channels (grayscale).
- Placeholder shape: the placeholder expects data of shape (None, 9216), where None is the batch size (which can be variable) and 9216 is the flattened size of each image (96 * 96 = 9216).
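To see the mismatch concretely, here is a small NumPy sketch; the (104, 96, 96, 1) shape is taken from the error message, and the array contents are dummy data:

```python
import numpy as np

# Dummy batch shaped like the data coming out of the image reader:
# 104 grayscale images of 96x96 pixels.
batch_xs = np.zeros((104, 96, 96, 1), dtype=np.float32)

print(batch_xs.shape)  # (104, 96, 96, 1) -- what you feed
print(96 * 96)         # 9216             -- what the placeholder wants per image
```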
Fixing the Placeholder and the Data Feeding
You need to ensure that the input placeholder can accept the shape of your data directly, or you need to reshape your data to match the expected shape.
Option 1: Reshape the Placeholder
If you want to keep the image dimensions intact, change the placeholder shape to match the shape of your input data. Note that the example's conv_net function reshapes its flat input back into an image at the start; with a 4-D placeholder, adjust or drop that reshape.
# Placeholder for input data
x = tf.placeholder(tf.float32, [None, 96, 96, 1]) # Adjusted to match your data shape
Option 2: Flatten the Input Data
Alternatively, you can flatten your input images before feeding them into the placeholder. This converts each image from shape (96, 96, 1) to a flat vector of length 9216.
Here’s how to do it:
- Flatten the Input Data:
batch_xs_flattened = batch_xs.reshape(batch_xs.shape[0], -1)  # Flatten the images
- Ensure the Placeholder Matches the Flattened Data:
# Placeholder for input data
x = tf.placeholder(tf.float32, [None, 9216])  # Keep this as is
- Read and Flatten Images:
Make sure that imgReader.read_images(pathname), or wherever you're reading the images, is adjusted so the images end up flattened.
Example Code
Here's a complete example with both options. Choose the one that suits your needs:
Option 1: Keeping the Original Image Shape
# Placeholder for input data
x = tf.placeholder(tf.float32, [None, 96, 96, 1])
# Define your model as usual
# ...
# When reading and feeding the data
batch_xs = imgReader.read_images(pathname)
sess.run(train_op, feed_dict={x: batch_xs, y: batch_ys})
Option 2: Flattening the Images
# Placeholder for input data
x = tf.placeholder(tf.float32, [None, 9216])
# Define your model as usual
# ...
# When reading and feeding the data
batch_xs = imgReader.read_images(pathname)
batch_xs_flattened = batch_xs.reshape(batch_xs.shape[0], -1)
sess.run(train_op, feed_dict={x: batch_xs_flattened, y: batch_ys})
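The flattening step itself is plain NumPy and can be checked in isolation; here random data stands in for whatever imgReader.read_images returns:

```python
import numpy as np

# Stand-in for the batch returned by imgReader.read_images(pathname)
batch_xs = np.random.rand(104, 96, 96, 1).astype(np.float32)

# Collapse everything except the batch dimension into one axis
batch_xs_flattened = batch_xs.reshape(batch_xs.shape[0], -1)

print(batch_xs_flattened.shape)  # (104, 9216) -- matches the [None, 9216] placeholder
```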
Model Adjustment
Ensure that the rest of your model definition matches the shape you choose for your placeholder. If you go with the flattened input, you'll need to adjust the first layer of your neural network to expect the flattened input size.
For example, if using the flattened input approach:
# Network Parameters
n_input = 9216 # 96x96
n_classes = 2 # 2 classes
# Store layers weight & bias
weights = {
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
'wd1': tf.Variable(tf.random_normal([24*24*64, 1024])),
'out': tf.Variable(tf.random_normal([1024, n_classes]))
}
biases = {
'bc1': tf.Variable(tf.random_normal([32])),
'bc2': tf.Variable(tf.random_normal([64])),
'bd1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
Ensure the rest of your network, especially the reshaping layers and fully connected layers, are adjusted accordingly.
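The 24*24*64 in 'wd1' comes from the two 2x2 max-pooling layers in the example network: each pooling halves the spatial size, so 96 -> 48 -> 24. A quick sanity check of that arithmetic (pure Python, assuming two stride-2 poolings and 64 filters after the second conv layer, as in the example):

```python
# Spatial size after each 2x2, stride-2 max-pooling layer
size = 96
for _ in range(2):  # the example network pools twice
    size //= 2      # 96 -> 48 -> 24

flattened = size * size * 64  # 64 feature maps after the second conv layer
print(size)       # 24
print(flattened)  # 36864 == 24*24*64, the input size of the 'wd1' weight matrix
```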
By following these steps, you should be able to fix the shape mismatch error and proceed with training your model on your custom dataset.