Discrete Control System with Time Delay

In summary, the conversation discusses the development of a software control algorithm to compensate for oscillator imperfections and frequency drift. The algorithm uses an NTP server to estimate the "true" time, compares it to the system's time, and then updates the local RTC hardware accordingly. However, combining a software PID controller with filtering/averaging techniques can cause over-compensation and oscillation. The conversation also notes the importance of accurate measurements for successful control and points to the NTP website for more information on algorithms and implementations.
  • #1
Number2Pencil
Hello,

I am trying to develop a software control algorithm to compensate for oscillator imperfections and frequency drift. I have an NTP server from which I can get a pretty good estimate (at 1 Hz) of the "true" time, which I compare to my system's time. I can differentiate the offset error to calculate the time drift (which is related to the frequency error). I can then update my local real-time clock (RTC) hardware with the new frequency, and the RTC will perform clock compensation using it. Once I can drive the time-drift error to 0, my local clock matches the "true" time very closely. If the oscillator drifts (due to a temperature change or something), the control algorithm should detect it and compensate.
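
For concreteness, here is a minimal standalone sketch (in C, not from the original post) of the measurement step just described: a simulated local clock runs 20 ppm fast, and differentiating the NTP-vs-local offset recovers the frequency error. The 20 ppm figure and the printout are illustrative only; in the real system the computed drift would be written to the RTC's compensation register.

    #include <stdio.h>

    int main(void)
    {
        const double Ts   = 1.0;     /* one NTP comparison per second      */
        const double ferr = 20e-6;   /* simulated oscillator error: 20 ppm */
        double local = 0.0, truth = 0.0, prev_offset = 0.0;

        for (int k = 0; k < 5; k++) {
            truth += Ts;                    /* "true" (NTP) time            */
            local += Ts * (1.0 + ferr);     /* local RTC runs slightly fast */

            double offset = truth - local;                /* phase error    */
            double drift  = (offset - prev_offset) / Ts;  /* freq error     */
            prev_offset   = offset;

            /* In the real system, `drift` (the fractional frequency error)
               would be handed to the RTC's compensation hardware.          */
            printf("k=%d  offset=%+7.1f us  drift=%+6.1f ppm\n",
                   k, offset * 1e6, drift * 1e6);
        }
        return 0;
    }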

I've got this working fairly well with a software PID controller, but the noise in the frequency error is enough to make the output swing around too much. I need to do some filtering/averaging to really get a good gauge of the true oscillator frequency, but doing so introduces delay. What I am noticing is that the PID controller makes a very small change to the control signal but, due to the delay, does not see its effect for several samples. On the very next sample, the PID controller makes an additional change, still not seeing any difference. These gradual changes start to snowball, and my control system starts to oscillate badly because it eventually tries to compensate for its own previous over-compensation. It's actually worse than not averaging at all...
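
One common mitigation, sketched below under assumptions that are mine rather than the poster's, is to let the controller act only once per averaging window, so each control move is reflected in the filtered measurement before the next move is made. The gains, the window length, and the rtc_set_freq_correction() hook are hypothetical stand-ins (stubbed here so the sketch runs standalone).

    #include <stdio.h>

    #define WINDOW 8   /* averaging window, samples (illustrative) */

    /* Hypothetical hardware hook: apply a fractional frequency trim. */
    static void rtc_set_freq_correction(double ppm)
    {
        printf("apply trim: %+.2f ppm\n", ppm);
    }

    static void on_freq_error_sample(double freq_err_ppm)
    {
        static double acc = 0.0, integ = 0.0;
        static int count = 0;

        acc += freq_err_ppm;
        if (++count < WINDOW)
            return;          /* hold the controller until the filtered
                                measurement reflects its last move      */

        double avg = acc / WINDOW;
        acc = 0.0;
        count = 0;

        const double Kp = 0.4, Ki = 0.1;   /* illustrative gains only */
        integ += Ki * avg;
        rtc_set_freq_correction(Kp * avg + integ);
    }

    int main(void)
    {
        for (int k = 0; k < 24; k++)
            on_freq_error_sample(20.0);    /* constant 20 ppm error */
        return 0;
    }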

I can reduce the effect by slowing down my controller dynamics, but I have to slow it to a crawl for it to be worthwhile. At that point the controller is awful and takes far too long to converge.

I was wondering if anyone has any suggestions for dealing with this?
 
  • #2
If you are trying to update the parameters at short intervals, you will need very accurate measurements of the errors.

A cheap digital watch will keep time to within a few seconds per day of absolute accuracy, and the change in its error from day to day is even smaller than that. So your clock system is probably already running at a consistent rate, with errors better than 1 part in 10,000 (there are 86,400 seconds in a day).

A long time ago, I saw a proprietary version of Unix that did this type of clock synchronization across a cluster of computers. I don't think it used a clever controller at all. It just measured the clock errors about once an hour and adjusted the clock rates to hit the correct time several hours into the future. After a few days to settle down, all the clocks stayed in sync to within about 1 second, which was near enough. It also ignored any time data that was far out of line, as probably being caused by a network glitch (e.g. lost data packets) rather than a genuine clock problem. That sometimes caused a problem when a new machine was added to the network and its time and date were not set accurately enough to "latch on" to the synchronization system, but that was easily fixed.
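
A rough sketch of that scheme, with illustrative numbers of my own choosing (the 60 s glitch threshold and the 6 hour horizon are assumptions, not values from the original system):

    #include <math.h>
    #include <stdio.h>

    #define MAX_PLAUSIBLE_OFFSET 60.0   /* seconds; reject anything wilder */

    /* Pick a rate trim so the accumulated correction cancels `offset_s`
       after `horizon_s` seconds, ignoring glitch measurements.           */
    static double plan_rate_trim(double offset_s, double horizon_s)
    {
        if (fabs(offset_s) > MAX_PLAUSIBLE_OFFSET)
            return 0.0;                 /* network glitch: leave rate alone */
        return offset_s / horizon_s;    /* fractional speed-up/slow-down    */
    }

    int main(void)
    {
        /* Clock is 0.5 s behind; aim to be on time 6 hours from now. */
        double trim = plan_rate_trim(0.5, 6.0 * 3600.0);
        printf("run the clock %.3f ppm fast\n", trim * 1e6);

        /* A 2000 s offset is treated as a glitch and ignored. */
        printf("glitch -> trim = %.3f\n",
               plan_rate_trim(2000.0, 6.0 * 3600.0));
        return 0;
    }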

http://www.ntp.org/ has details of the algorithms they use (which are more complicated than the above, and use nonlinear control rather than a linear PID controller), as well as the code of a reference implementation.
 

FAQ: Discrete Control System with Time Delay

What is a discrete control system with time delay?

A discrete control system with time delay is a control system that uses a digital, or discrete-time, signal to control a system, with the added feature of a time delay. This means there is a lag between an input signal and the corresponding output, which can affect the overall performance of the system.

What are the main components of a discrete control system with time delay?

The main components of a discrete control system with time delay are the input signal, a processor or controller, a time-delay element, and the output signal. The input signal is the desired control signal; it is processed by the controller and then applied to the system after the time delay.
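
As an illustration (not part of the original FAQ), the time-delay element in a discrete system is often modelled as a FIFO buffer. In this sketch a unit pulse emerges three samples after it is applied; the delay length is an arbitrary choice:

    #include <stdio.h>

    #define DELAY 3   /* delay element: output lags input by 3 samples */

    int main(void)
    {
        double buf[DELAY] = {0};   /* FIFO modelling the delay element */
        int head = 0;

        for (int k = 0; k < 8; k++) {
            double u = (k == 0) ? 1.0 : 0.0;  /* input: unit pulse at k=0 */

            double y = buf[head];  /* output = input from DELAY steps ago */
            buf[head] = u;
            head = (head + 1) % DELAY;

            printf("k=%d  u=%.0f  y=%.0f\n", k, u, y);
        }
        return 0;
    }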

What are the advantages of using a discrete control system with time delay?

One advantage of using a discrete control system with time delay is that it can be easily implemented and controlled using digital technology. It also allows for flexible and precise control of the system, as the time delay can be adjusted to optimize performance.

What are the potential drawbacks of a discrete control system with time delay?

One drawback of using a discrete control system with time delay is that the time delay can introduce instability into the system. This can be especially problematic in systems that require fast response times. Additionally, the time delay may also introduce errors or inaccuracies in the control signal.
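
A small simulation makes the instability concrete. In the sketch below, a proportional gain that settles a simple integrator plant in one step turns into a growing oscillation when the same loop contains a two-sample delay; the plant, the gain, and the delay length are illustrative choices, not taken from the text above:

    #include <stdio.h>

    #define D 2   /* loop delay, in samples */

    /* Plant: x[k+1] = x[k] + u[k-delay]; controller: u[k] = -K * x[k].
       With no delay and K = 1 the loop settles in one step; the same
       gain with a two-sample delay diverges.                           */
    static void simulate(int delay, double K)
    {
        double x = 1.0;             /* initial error                  */
        double ubuf[D + 1] = {0};   /* pending control actions (FIFO) */
        int head = 0;

        for (int k = 0; k < 12; k++) {
            double u = -K * x;      /* control decision               */
            double u_applied;
            if (delay == 0) {
                u_applied = u;
            } else {
                u_applied = ubuf[head];   /* action made delay steps ago */
                ubuf[head] = u;
                head = (head + 1) % delay;
            }
            x += u_applied;         /* plant update                   */
            printf("%+7.3f ", x);
        }
        printf("\n");
    }

    int main(void)
    {
        printf("K=1, no delay : "); simulate(0, 1.0);
        printf("K=1, delay=2  : "); simulate(D, 1.0);
        return 0;
    }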

How can the time delay in a discrete control system be minimized?

The time delay in a discrete control system can be minimized by using efficient processing methods and optimizing the system design. This may include using faster processors, reducing the number of processing steps, and minimizing the distance between the controller and the system. Additionally, implementing feedback control can also help to reduce the effects of time delay on the system's performance.
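
As one concrete example of delay-aware feedback (a Smith-predictor-style correction, added here for illustration rather than taken from the text above): if the loop delay is known, the controller can act on a prediction of the state that includes the control moves still "in flight", recovering most of the delay-free performance. This reuses the integrator plant from the previous sketch, with a two-sample delay assumed known:

    #include <stdio.h>

    #define D 2   /* known loop delay, samples */

    int main(void)
    {
        double x = 1.0;          /* initial error                 */
        double ubuf[D] = {0};    /* control moves still in flight */
        int head = 0;

        for (int k = 0; k < 12; k++) {
            /* Predict the state the new move will actually act on by
               adding the D moves that have not yet reached the plant. */
            double pending = ubuf[0] + ubuf[1];
            double x_pred  = x + pending;
            double u       = -1.0 * x_pred;  /* full gain is safe again */

            double u_applied = ubuf[head];   /* delayed action arrives  */
            ubuf[head] = u;
            head = (head + 1) % D;

            x += u_applied;                  /* plant update            */
            printf("%+7.3f ", x);
        }
        printf("\n");
        return 0;
    }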
