Hello everybody,
I have this problem:
starting from a vector of 100 random values, I have to generate an image of size 128x128x3 using a model consisting of a fully connected layer followed by 5 deconvolution (Conv2DTranspose) layers.
This is my model:
Python:
import tensorflow as tf

def generator_model(noise_dim):
    n_layers = 5
    k_w, k_h = [8, 8]
    input_dim = (noise_dim,)
    i_w, i_h, i_d = [8, 8, 1024]  # starting filters
    strides = (1, 1)
    weight_initializer = None

    model = tf.keras.Sequential()
    # project the noise vector to 8*8*1024 units, then reshape to 8x8x1024
    model.add(tf.keras.layers.Dense(i_w * i_h * i_d, input_shape=input_dim, kernel_initializer=weight_initializer))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.ReLU())
    model.add(tf.keras.layers.Reshape((i_w, i_h, i_d)))

    # deconvolution blocks: halve the filter count and double the kernel size each time
    for i in range(n_layers - 1):
        print(k_w, k_h)
        model.add(tf.keras.layers.Conv2DTranspose(i_d, (k_w, k_h), strides, padding='same', use_bias=False))
        model.add(tf.keras.layers.BatchNormalization())
        model.add(tf.keras.layers.ReLU())
        i_d = int(i_d / 2)
        k_w = int(k_w * 2)
        k_h = int(k_h * 2)

    # final layer maps to 3 output channels
    k_w = i_d
    k_h = i_d
    model.add(tf.keras.layers.Conv2DTranspose(3, (k_w, k_h), strides, padding='same', use_bias=False))
    return model
Why do I always get an 8x8x3 image, with no increase in size at each layer?
Thanks
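For comparison, here is a minimal sketch of a generator whose output does grow from 8x8 up to 128x128, under the assumption that each Conv2DTranspose is meant to upsample by a factor of 2 (strides of (2, 2) with padding='same'). The function name, kernel size, filter counts, and the final tanh activation are illustrative assumptions, not taken from the post above:

Python:
import tensorflow as tf

def upsampling_generator(noise_dim=100):
    # Hypothetical sketch: dense projection to 8x8x1024, then four
    # stride-2 transposed convolutions, each doubling the spatial size
    # (8 -> 16 -> 32 -> 64 -> 128), and a final stride-1 layer mapping
    # to 3 channels.
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(8 * 8 * 1024, input_shape=(noise_dim,)))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.ReLU())
    model.add(tf.keras.layers.Reshape((8, 8, 1024)))

    filters = 512
    for _ in range(4):
        # strides=(2, 2) with padding='same' doubles height and width
        model.add(tf.keras.layers.Conv2DTranspose(
            filters, (5, 5), strides=(2, 2), padding='same', use_bias=False))
        model.add(tf.keras.layers.BatchNormalization())
        model.add(tf.keras.layers.ReLU())
        filters //= 2

    # stride 1 here: spatial size stays 128x128, channels become 3
    model.add(tf.keras.layers.Conv2DTranspose(
        3, (5, 5), strides=(1, 1), padding='same', use_bias=False,
        activation='tanh'))
    return model

# model = upsampling_generator(100)
# model.summary()  # output shape should end in (None, 128, 128, 3)

With padding='same', a Conv2DTranspose layer's output size is input size multiplied by the stride, so stride 1 leaves the spatial dimensions unchanged regardless of kernel size.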