Python Loading .nii images while saving memory

  • Thread starter: BRN
  • Tags: Images, Memory
AI Thread Summary
The discussion centers on the challenge of loading large .nii datasets for a Deep Learning project on a PC with limited RAM and swap memory. The user seeks alternatives to load the datasets incrementally to avoid memory overload, suggesting the need for an iterator to load files on demand. They also mention the necessity to adjust the batch size for the DCGAN algorithm due to memory constraints. Suggestions include implementing an iterator in the loss function of the GAN discriminator and utilizing TensorFlow's image loading utilities. The conversation emphasizes the importance of efficient memory management in deep learning tasks.
BRN
Hello everybody,
for my Deep Learning exam, I have to develop a project that involves generating 3D images in .nii format using the DCGAN algorithm, trained on real MRI scans of the brains of patients with Alzheimer's disease.

I have a serious problem. I need to load three different datasets that weigh 3 GB, 5 GB and 8 GB respectively. Unfortunately I am forced to use an old PC with only 4 GB of RAM and 2 GB of swap memory, so it is impossible for me to load all the files at once using this simple code:

[CODE lang="python" title="loading files"]
import nibabel as nib

train_data = []
data_path = './data/ADNI_test/'
for i in range(len(filenames)):
    # Build the full path for each file and load its voxel data into memory
    mri_file = data_path + filenames[i]
    train_data.append(nib.load(mri_file).get_fdata())
[/CODE]

Would any of you be able to suggest an alternative solution? Is there a way to load the files a little at a time without overloading memory? In the DCGAN algorithm the batch size is set to 64 files, but I will certainly have to decrease it to 30.

Thank you!
 
I wouldn't have thought it would be necessary to have all the image files in memory; can you provide the ML engine with an iterator that loads each file on demand?
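For example, something along these lines might work: a minimal sketch, assuming the files are read with nibabel and fed to TensorFlow through tf.data. The names nii_generator and volume_shape are only placeholders, and volume_shape stands for whatever fixed shape your preprocessed volumes actually have; data_path and filenames are the same as in your loading code.

[CODE lang="python" title="loading on demand (sketch)"]
import nibabel as nib
import numpy as np
import tensorflow as tf

data_path = './data/ADNI_test/'
# Assumed fixed shape of one preprocessed volume: (depth, height, width, channels)
volume_shape = (64, 64, 64, 1)

def nii_generator():
    # Yield one volume at a time, so only a single file is in memory per step
    for name in filenames:
        vol = nib.load(data_path + name).get_fdata().astype(np.float32)
        vol = vol[..., np.newaxis]  # add a channel axis
        yield vol

dataset = (
    tf.data.Dataset.from_generator(
        nii_generator,
        output_signature=tf.TensorSpec(shape=volume_shape, dtype=tf.float32))
    .shuffle(8)    # small shuffle buffer to keep memory use low
    .batch(30)     # the reduced batch size you mentioned
    .prefetch(1))  # read the next batch from disk while the current one trains
[/CODE]

With shuffle, batch and prefetch kept small like this, only a handful of volumes ever sit in RAM at the same time instead of the whole dataset.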
 
I hope it can be done, but at the moment I don't know how.
Should the iterator be implemented directly in the loss function of the GAN discriminator?

[CODE lang="python" title="discriminator"]
import tensorflow as tf
from tensorflow.keras import Sequential

def discriminator_model(strides, kernel_size, input_img, weight_initializer, downsample_layers):
    rate = 0.2
    filters = input_img.shape[1]

    model = Sequential()

    # First downsampling block: Conv3D takes filters first, then kernel_size
    model.add(tf.keras.layers.Conv3D(filters, kernel_size, strides=strides, padding='same',
                                     input_shape=input_img.shape,
                                     kernel_initializer=weight_initializer))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dropout(rate=rate))

    # Remaining downsampling blocks double the number of filters each time
    for l in range(downsample_layers - 1):
        filters = int(filters * 2)
        model.add(tf.keras.layers.Conv3D(filters, kernel_size, strides=strides, padding='same'))
        model.add(tf.keras.layers.LeakyReLU())
        model.add(tf.keras.layers.Dropout(rate=rate))

    # Single output logit: real vs. fake
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(1))
    return model

# Binary cross-entropy on logits (the final Dense layer has no activation)
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_output, fake_output):
    # Real images should be classified as 1, generated images as 0
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss
[/CODE]

where 'real_output' would come from the real image that is compared with the artificially generated one.

Any ideas?
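To sketch how that could look: rather than placing the iterator inside discriminator_loss, the on-demand dataset from the earlier sketch would usually feed the training loop, and each batch it yields is what gets passed through the discriminator to produce real_output. In the sketch below, generator, discriminator, generator_loss, the two optimizers, noise_dim and num_epochs are assumed to exist elsewhere in the project; the names are only illustrative.

[CODE lang="python" title="training loop (sketch)"]
import tensorflow as tf

@tf.function
def train_step(real_images):
    # Sample a batch of latent vectors matching the current batch size
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        fake_images = generator(noise, training=True)
        real_output = discriminator(real_images, training=True)
        fake_output = discriminator(fake_images, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    # Update each network from its own gradients
    gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gen_grads, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(disc_grads, discriminator.trainable_variables))

for epoch in range(num_epochs):
    # Each iteration pulls only one batch of volumes from disk
    for real_batch in dataset:
        train_step(real_batch)
[/CODE]

Arranged this way, only the 30 volumes of the current batch are held in memory at once, and the loss functions themselves stay unchanged.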
 