Restricted Boltzmann machine uniqueness

In summary, it is possible to get the same distribution by setting the biases and weights to different values.
  • #1
Jufa
I am working with restricted Boltzmann machines to model distributions in my final degree project, and a question has come to my mind.
A restricted Boltzmann machine with v visible binary neurons and h hidden neurons models a distribution in the following manner:

## f_i = e^{\sum_k b[k]\sigma^i[k] + \sum_s \log\left(c[s] + e^{\sum_k w[s][k] b[k]}\right)} ##
## Z = \sum_i f_i ##
## p_i = f_i/Z ##
Where b[k] and c[s] are, respectively, the k-th bias of the visible layer and the s-th bias of the hidden layer, and w[s][k] is the (s, k) component of the network's weight matrix.
"i" here refers to a certain binary vector with components ##\sigma^i[k]##.
My question is:
Given a certain restricted Boltzmann machine (i.e. a certain set of biases and weights) that models a distribution ##p_i##, is it possible to find a different configuration (i.e. a different set of biases and weights) that gives the same distribution?
Thanks in advance.
 
Last edited by a moderator:
  • #2
If a moderator can explain why my text is struck through, I would appreciate it.
 
  • #3
We are attempting to fix the problem. I got rid of the strikethrough text, but now the LaTeX/MathJax isn't rendering. We moved to a new version of the forum software with a new post editor, and perhaps it has injected something or requires a special way to enter math expressions.
 
  • #5
I think I have restored the corrected original version. Please refresh your browsers.

Hint: Do not use [B], [U], [S], [I]. Unless you "cheat" as I did here, the interpreter takes them as the opening tags for bold / underline / strikeout / italic.
 
  • #6
Many thanks!
 
  • #7
Jufa said:
Given a certain restricted Boltzmann machine (i.e. a certain set of biases and weights) that models a distribution ##p_i##, is it possible to find a different configuration (i.e. a different set of biases and weights) that gives the same distribution?
I’m not very familiar with the topic, but trivially, if you set ##c[s]=0##, then you get the same ##f_i## by exchanging the ##w[s][k]## and the ##\sigma^i[k]##. So for instance,
$$f_{i,1}=\exp\left(\sum_k b[k]\sigma_1^i[k]+\sum_s \log e^{\sum_k w_1[s][k]b[k]}\right)$$
$$f_{i,2}=\exp\left(\sum_k b[k]\sigma_2^i[k]+\sum_s \log e^{\sum_k w_2[s][k]b[k]}\right)$$
Then let ##\sigma_1^i[k]=w_2[s][k]## and ##\sigma_2^i[k]=w_1[s][k]##.
That gives you the same probability distribution, but I don't know whether that is allowed with restricted Boltzmann machines.
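
As a simpler check than the swap above, here is a tiny numerical experiment using the expression exactly as it is written in this thread (illustrative names and values). Note that, as written, the second sum does not involve ##\sigma^i[k]##, so it cancels when dividing by ##Z##; two parameter settings that share the visible biases then give the same ##p_i## regardless of the weights and hidden biases:

```python
# Check that two different (c, w) settings give the same p_i under the
# expression as written in this thread (same visible biases b).
import itertools
import numpy as np

def distribution(b, c, w):
    # p_i = f_i / Z with f_i exactly as written in the thread
    states = [np.array(s) for s in itertools.product([0, 1], repeat=len(b))]
    f = np.array([np.exp(b @ s + np.sum(np.log(c + np.exp(w @ b)))) for s in states])
    return f / f.sum()

v, h = 3, 2
rng = np.random.default_rng(1)
b = rng.normal(size=v)              # same visible biases in both cases

p1 = distribution(b, rng.uniform(0.1, 1.0, size=h), rng.normal(size=(h, v)))
p2 = distribution(b, rng.uniform(0.1, 1.0, size=h), rng.normal(size=(h, v)))
print(np.allclose(p1, p2))          # True: the i-independent term cancels in Z
```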
 
Last edited:

FAQ: Restricted Boltzmann machine uniqueness

What is a Restricted Boltzmann machine (RBM)?

A Restricted Boltzmann machine is a type of artificial neural network used for unsupervised learning tasks such as dimensionality reduction and feature learning. It consists of a layer of visible units and a layer of hidden units connected by weighted links, with no connections within a layer (the "restriction"), and it learns the underlying patterns in a dataset through a stochastic training procedure.
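
As a rough sketch of how the two layers interact in a standard binary RBM (these are the standard conditional formulas, not something taken from this thread; the function and variable names are mine):

```python
# Conditional distributions in a standard binary RBM (illustrative sketch).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_hidden_given_visible(v, w, c):
    # p(h_s = 1 | v) = sigmoid(c[s] + sum_k w[s][k] v[k])
    return sigmoid(c + w @ v)

def p_visible_given_hidden(h, w, b):
    # p(v_k = 1 | h) = sigmoid(b[k] + sum_s w[s][k] h[s])
    return sigmoid(b + w.T @ h)
```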

How is the uniqueness of an RBM determined?

An RBM is specified by the weights connecting its visible and hidden units together with the biases of both layers. These parameters are typically learned with contrastive divergence, an approximate maximum-likelihood procedure often described as reducing the RBM's reconstruction error. Two RBMs with different weights and biases are different parameterisations; whether they also define different distributions is exactly the question raised in this thread.
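
As a sketch of what "learned through contrastive divergence" means in practice, here is a bare-bones CD-1 update in its standard single-sample form (illustrative, not taken from this thread):

```python
# One CD-1 parameter update for a binary RBM (standard rule, sketch).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, w, b, c, lr, rng):
    """One CD-1 step for a single binary training vector v0."""
    p_h0 = sigmoid(c + w @ v0)                        # hidden probabilities given the data
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(b + w.T @ h0)                      # "reconstruction" of the visible layer
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(c + w @ v1)                        # hidden probabilities given the reconstruction
    # positive phase minus negative phase (CD-1 gradient estimate)
    w += lr * (np.outer(p_h0, v0) - np.outer(p_h1, v1))
    b += lr * (v0 - v1)
    c += lr * (p_h0 - p_h1)
    return w, b, c
```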

Why is the uniqueness of an RBM important?

The question of uniqueness matters for how a trained RBM is interpreted and compared. If the parameters were uniquely determined by the distribution the model represents, individual weights and biases could be read off and compared directly; if instead several parameter settings represent the same distribution, different training runs can land on different but equivalent solutions, even when performance on tasks such as classification or generation is the same.

Can two RBMs have the same weights and still be unique?

No. An RBM's parameterisation is the combination of its weights and biases, so two RBMs with exactly the same weights and biases are the same model. The converse, however, does not hold: as discussed in this thread, two RBMs with different weights and biases can still define the same distribution.

How can the uniqueness of an RBM be improved?

The flexibility of an RBM can be increased by adding more hidden units, which lets it represent more complex patterns and relationships in the data. Training choices such as the size of the dataset or the learning rate also affect which parameter setting the model converges to, but because different parameter settings can represent the same distribution, none of this guarantees a unique set of weights and biases.
