Hi, I'm attempting to match prerecorded guitar data to data recorded live as a user plays, as shown in the following video:
The red lines are the prerecorded data, and the cyan lines are the live data. In the above example, each string is plucked open: the lower left is the low E string, above that the A string, and so forth.
From this PDF (no pun intended), http://www.indiana.edu/~iulg/moss/hmmcalculations.pdf, linked from Wikipedia, I can infer that the Baum-Welch process gives an estimate for matching two sequences. My initial thought was to scale the data (since it is in the [0.0, 1.0) range), round to an integer, and use that as a state, but I suspect there is a better way of doing this. Any suggestions? I'm using the jahmm library for now.
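For what it's worth, here is a minimal sketch of the scale-and-round idea described above, assuming samples lie in [0.0, 1.0) and we want a fixed alphabet of integer observation symbols (discrete-observation HMM libraries such as jahmm work over integer symbols). The class name `Quantizer` and the symbol count of 16 are my own illustrative choices, not anything from the question:

```java
public class Quantizer {
    // Hypothetical choice of alphabet size; tune for your data.
    static final int NUM_SYMBOLS = 16;

    // Map a sample in [0.0, 1.0) to an integer symbol in [0, NUM_SYMBOLS).
    static int quantize(double sample) {
        int symbol = (int) Math.floor(sample * NUM_SYMBOLS);
        // Clamp to guard against numeric overshoot (e.g. sample == 1.0).
        return Math.min(Math.max(symbol, 0), NUM_SYMBOLS - 1);
    }

    public static void main(String[] args) {
        double[] frame = {0.02, 0.48, 0.93};
        for (double s : frame) {
            System.out.println(s + " -> symbol " + quantize(s));
        }
    }
}
```

The resulting integer sequences (one from the prerecorded data, one from the live data) could then be fed to a Baum-Welch trainer as observation sequences. Note that uniform quantization like this is lossy; a finer alphabet or a continuous-observation (e.g. Gaussian) HMM may capture the amplitude data better.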