I am trying to come up with an overall (parent) loss function for the following neural network model. An algorithm for processing the image would also be helpful.
The quad-tree compression algorithm divides an image into increasingly small segments (squares) and stops subdividing a region when all of its pixels have the same value.
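For concreteness, here is a minimal sketch of that decomposition, assuming a single-channel image stored as a square, power-of-two 2D NumPy array; the function name and the `tol` tolerance (for deciding when a region counts as "all the same value") are just placeholders.

```python
import numpy as np

def quadtree_segments(img, x=0, y=0, size=None, tol=0.0):
    """Return a list of (x, y, size) leaf squares of the quad-tree."""
    if size is None:
        size = img.shape[0]          # assumes a square, power-of-two image
    region = img[y:y + size, x:x + size]
    # Stop when all pixels in the region are (approximately) the same value,
    # or when the region can no longer be subdivided.
    if size == 1 or region.max() - region.min() <= tol:
        return [(x, y, size)]
    half = size // 2
    segments = []
    for dx in (0, half):
        for dy in (0, half):
            segments += quadtree_segments(img, x + dx, y + dy, half, tol)
    return segments
```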
I would like to map a noise vector directly to an image. On top of that, the loss function should maximize the distance (KL divergence) between the segments found by running the quad-tree algorithm on that image.
This is involved because we have to iteratively de-resolve the image slightly and find the new segments after each step, and then maximize the distance between all of these segments at the same time, perhaps with a bias towards the segments found at the highest resolution.
Another consideration is that this should happen for each of the RGB channels.
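To make the idea concrete, below is a rough sketch of how I picture the loss, assuming: the `quadtree_segments` function sketched above; that each segment is summarized by a soft (differentiable) histogram of its pixel values in [0, 1]; that de-resolving is done by average pooling; and that the segment layout itself is treated as fixed within each forward pass (only the pixel values inside the segments carry gradients). All names here (`quadtree_kl_loss`, `soft_histogram`, the level weights) are illustrative, not an existing API.

```python
import torch
import torch.nn.functional as F

def soft_histogram(values, n_bins=16, bandwidth=0.05):
    """Differentiable histogram of pixel values (assumed in [0, 1])."""
    centers = torch.linspace(0.0, 1.0, n_bins, device=values.device)
    weights = torch.exp(-((values.unsqueeze(-1) - centers) ** 2) / (2 * bandwidth ** 2))
    hist = weights.sum(dim=0) + 1e-8
    return hist / hist.sum()

def quadtree_kl_loss(image, n_levels=3, level_weights=(1.0, 0.5, 0.25)):
    """Negative mean pairwise symmetric KL between quad-tree segment
    histograms, accumulated over RGB channels and over progressively
    de-resolved copies of the image (shape (3, H, W))."""
    loss = 0.0
    for level in range(n_levels):
        # De-resolve by average pooling; level 0 is the full resolution.
        img = F.avg_pool2d(image.unsqueeze(0), 2 ** level).squeeze(0) if level else image
        for c in range(img.shape[0]):
            channel = img[c]
            # Segment layout is computed non-differentiably per forward pass.
            boxes = quadtree_segments(channel.detach().cpu().numpy(), tol=1e-3)
            hists = [soft_histogram(channel[y:y + s, x:x + s].reshape(-1))
                     for (x, y, s) in boxes]
            if len(hists) < 2:
                continue
            h = torch.stack(hists)                      # (n_segments, n_bins)
            logh = h.log()
            # Pairwise KL(p_i || p_j) between all segment histograms.
            kl = (h.unsqueeze(1) * (logh.unsqueeze(1) - logh.unsqueeze(0))).sum(-1)
            sym = kl + kl.transpose(0, 1)
            # Maximize divergence => minimize its negative; bias toward level 0.
            loss = loss - level_weights[level] * sym.mean()
    return loss
```

In training I would plug the generator output straight in, e.g. `loss = quadtree_kl_loss(generator(z))`, and minimize it; minimizing the negative pairwise symmetric KL pushes the segment distributions apart, with the level weights biasing the objective towards the highest-resolution segments.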
Thank you!
If this gets to a paper I will mention contributors :)