TL;DR Summary: Dictionary Learning and Bases
Hello Forum,
I am trying to get a grasp of dictionaries and dictionary learning, which are new topics to me. In general, we express a vector ##A## using a basis.
A basis is a complete set of vectors that we can use to expand any other vector as a linear combination of the basis vectors. For example, a signal ##x(t)## can be expanded in the Fourier basis, in the Dirac delta basis, etc. These are examples of orthogonal bases (any two distinct basis vectors have zero inner product). The same signal can also be expanded using wavelets (wavelet transform) or cosines (cosine transform), etc.
Now on to dictionaries: a dictionary is an overcomplete set of vectors, whose elements are called atoms. We can use the atoms to write an arbitrary vector as a weighted sum. In general, we prefer orthonormal bases because the expansion coefficients can then be obtained directly via inner products.
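To make sure I have the basis case straight, here is a small numpy sketch I put together (the vectors ##B_1, B_2## and the point ##A## are made up purely for illustration): for an orthonormal basis, each coefficient really is just an inner product.
```python
import numpy as np

# Made-up orthonormal basis of R^2 (standard basis rotated by 45 degrees).
B1 = np.array([1.0, 1.0]) / np.sqrt(2)
B2 = np.array([-1.0, 1.0]) / np.sqrt(2)

A = np.array([3.0, 1.0])  # arbitrary example vector

# Because the basis is orthonormal, each coefficient is just an inner product.
b1 = np.dot(A, B1)
b2 = np.dot(A, B2)

# Reconstruction is exact: A = b1*B1 + b2*B2
A_rec = b1 * B1 + b2 * B2
print(np.allclose(A, A_rec))  # True
```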
What is the point of a dictionary where some of the vectors are correlated and the set is overcomplete?
Does the inner product still provide the coefficients?
Say we have a vector ##A## in ##\mathbb{R}^2##. Every basis of ##\mathbb{R}^2## has exactly two vectors. Using the basis ##B = (B_1, B_2)##, we can write $$A = b_1 B_1 + b_2 B_2$$
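My understanding is that if the basis is not orthogonal, plain inner products no longer give the coefficients; instead, one solves a small linear system. A sketch of that (again with made-up vectors):
```python
import numpy as np

# Made-up non-orthogonal basis of R^2, stored as the columns of M.
B1 = np.array([1.0, 0.0])
B2 = np.array([1.0, 1.0])
M = np.column_stack([B1, B2])

A = np.array([3.0, 1.0])

# For a general (non-orthogonal) basis the coefficients come from
# solving M @ [b1, b2] = A, not from plain inner products.
b1, b2 = np.linalg.solve(M, A)
print(b1, b2)                             # 2.0 1.0
print(np.allclose(A, b1 * B1 + b2 * B2))  # True
```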
Now say we have a dictionary ##D = (D_1, D_2, D_3, D_4)## of four atoms, which can also be used to express $$A = d_1 D_1 + d_2 D_2 + d_3 D_3 + d_4 D_4$$
Is that correct?
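If I try this ##\mathbb{R}^2## dictionary numerically, the non-uniqueness shows up right away: the pseudoinverse gives one (dense) set of coefficients, while a sparse solver gives another. A sketch using four made-up unit-norm atoms and scikit-learn's orthogonal matching pursuit:
```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

# Made-up overcomplete dictionary in R^2: four unit-norm atoms as columns.
angles = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])
D = np.vstack([np.cos(angles), np.sin(angles)])  # shape (2, 4)

A = np.array([3.0, 1.0])

# With 4 atoms in R^2 the expansion A = sum_k c_k D_k is NOT unique.
# One standard choice is the minimum-norm solution via the pseudoinverse:
c = np.linalg.pinv(D) @ A
print(np.allclose(D @ c, A))         # True: exact, but c is dense

# Sparse coding instead seeks a representation that uses few atoms,
# e.g. orthogonal matching pursuit limited to 2 nonzero coefficients.
c_sparse = orthogonal_mp(D, A, n_nonzero_coefs=2)
print(np.allclose(D @ c_sparse, A))  # True, using at most 2 atoms
```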
What is dictionary "learning" then?
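From what I have read, dictionary learning means fitting the atoms themselves to a set of training signals, alternating between sparse coding (with the atoms fixed) and atom updates (with the codes fixed). Is that essentially what scikit-learn's DictionaryLearning does? A sketch on random toy data (purely illustrative; real inputs would be image patches, audio frames, etc.):
```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy data: 200 signals in R^8 (random here, just for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))

# Learn an overcomplete dictionary of 16 atoms for R^8 by alternating
# between sparse coding and atom updates.
dico = DictionaryLearning(n_components=16, transform_algorithm="omp",
                          transform_n_nonzero_coefs=3, random_state=0)
codes = dico.fit_transform(X)   # sparse coefficients, shape (200, 16)
atoms = dico.components_        # learned atoms, shape (16, 8)

# Each signal is approximated by a sparse combination of the learned atoms.
print(np.mean(np.sum(codes != 0, axis=1)))  # at most 3 nonzeros per signal
```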