ineedmunchies
Ok so I'm just beginning a year-long MSc project. It will, hopefully, eventually involve a method for turning an IP core into a dataflow actor/node, which can then easily be included in a design environment that uses dataflow modelling to design digital hardware.
But first I need to get my head around a few concepts, and the initial stepping stone is dealing with bursty data, as the hardware cores will only run when there are enough "tokens" available at their inputs for them to operate correctly. So basically, does anyone have any ideas how I could do this in hardware? Essentially, tell the processing block not to operate until there are sufficient tokens at each input. These tokens will not arrive in a continuous stream; they arrive as they are produced by other cores, which can be considered random for the time being.
I was considering a counter on each input: the counter would count each "token" that joins the queue, and then decrement by however many "tokens" are "consumed" when the processing core "fires." However, I felt that this might be wasteful and wondered if anyone else had any ideas. I can't seem to find anything about it from googling, and I wouldn't know where to start looking in books. All of the material on handling bursty data seems to be from the software point of view.
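For what it's worth, the counter scheme described above can be sketched in software before committing to hardware. Here is a minimal Python model of the firing rule, purely illustrative: the `Actor` class and its `consumption` parameter are my own made-up names, not from any real dataflow library, and each counter simply tracks queued tokens per input, with the actor firing only when every input meets its threshold.

```python
class Actor:
    """Toy model of a dataflow actor guarded by per-input token counters.

    The actor may fire only when every input queue holds at least as many
    tokens as that input consumes per firing (the rule from the post above).
    """

    def __init__(self, consumption):
        # consumption[i] = number of tokens input i consumes per firing
        self.consumption = list(consumption)
        # one counter per input, incremented as tokens arrive
        self.counters = [0] * len(self.consumption)

    def token_arrived(self, port):
        """A producer core has pushed one token onto input `port`."""
        self.counters[port] += 1

    def can_fire(self):
        """True when every input has reached its consumption threshold."""
        return all(have >= need
                   for have, need in zip(self.counters, self.consumption))

    def fire(self):
        """Consume tokens from every input and fire; returns False if blocked."""
        if not self.can_fire():
            return False
        for i, need in enumerate(self.consumption):
            self.counters[i] -= need
        return True
```

In hardware this maps naturally onto a small up/down counter per input FIFO plus a comparator, with the firing signal being the AND of all the comparator outputs, so the per-input overhead is modest.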