Applying Simpson's Rule to Data Historian Sampling

In summary, the thread is about a poster seeking help using Simpson's rule to compute total flow over a given period from sampled flow-rate data. Other methods, such as the rectangular rule and a least-squares fit, are suggested as possibly more accurate or efficient for the data set, and it is noted that Simpson's rule pays off mainly for curved rather than linear data.
  • #1
ppamco
Hi All
I am hoping someone can help me out with my problem.
I am sampling data from a flow meter every 10 seconds and storing it in a data historian. The readings are the instantaneous flow rate at the moment the data is sampled.
My problem is that I need to work out, based on the stored samples, the total flow over a given period (say one day). I have been advised that Simpson's rule may offer a reasonable way to integrate the area under the curve represented by the samples over that time span.
Let me be frank, I am no good at math! I can find plenty of references to Simpson's rule on the web, but I don't understand much of what I am reading.
The calculation engine supplied with the data historian supports VBScript, external .exe files, and its own scripting language (CalcScript).
I am hoping that some kind soul will take pity on me and help me out in understanding how to apply simpson's rule to my set of circumstances.
Any help is greatly appreciated.

ppamco
 
  • #2
I suppose this is too late to answer; I wish I had seen this before. You don't need to use Simpson's rule to begin with. Just use the rectangular rule: if one measurement is, say, 9.3 gal/sec and your samples are 10 seconds apart, then 9.3 gal/sec × 10 sec = 93 gallons; add that up for every sample. The next steps up in accuracy are the trapezoidal rule, then Simpson's rule, 3/8 Simpson's rule, and Gaussian integration, but the rectangular rule will get you started and may be good enough.
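
Not from the thread itself: here is a minimal Python sketch of the rectangular-rule running total described above. The historian actually runs VBScript or CalcScript, so this is illustrative only; `samples` and `SAMPLE_INTERVAL_S` are invented names.

```python
# Rectangular (left-endpoint) rule: treat each reading as constant
# over its sampling interval, then sum the contributions.

SAMPLE_INTERVAL_S = 10.0  # seconds between historian samples (assumed)

def total_flow_rectangular(samples):
    """Approximate total flow from instantaneous rates in gal/sec."""
    return sum(rate * SAMPLE_INTERVAL_S for rate in samples)

# Example: three readings of 9.3 gal/sec over 30 seconds -> 279 gallons
print(total_flow_rectangular([9.3, 9.3, 9.3]))
```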
 
  • #3
I'd be curious to see how the data looks when plotted (as accurately as possible). If it's linear in most places, Simpson's rule will likely give the least accuracy compared to the improved trapezoidal or midpoint rule; however, if the data is curved (specifically quadratic or cubic), then Simpson's rule will give you an ideal result. I'm unsure how Simpson's compares to the other Newton-Cotes approximation formulas when it comes to nth-degree polynomials, but I'd still be willing to bet that Simpson's would perform decently on polynomials of degree greater than 4. Looking at the error bound formula for Simpson's rule, notice that the fourth derivative of any cubic polynomial is zero; therefore, the error bound on any cubic using Simpson's rule (regardless of the number of subintervals, with the only restriction that the count be even) is also zero.
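
For reference, the textbook error bound for composite Simpson's rule on $[a, b]$ with uniform step $h$ (a standard result, not quoted from the thread) is:

$$|E| \le \frac{(b-a)\,h^4}{180}\,\max_{a \le x \le b}\left|f^{(4)}(x)\right|$$

Since $f^{(4)} \equiv 0$ for any cubic, the bound vanishes, which is the point made above.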

Once you get some nice data samples, you could always do a least squares analysis and figure out which polynomial fits best. From there, you should be able to determine which Newton-Cotes formula gives the most accuracy with respect to your data.
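One way to run that least-squares check, as a sketch only: NumPy's `polyfit` reports the residual sum of squares for each fitted degree, so comparing residuals across low degrees suggests how "curved" the data really is. The arrays below are invented, not the poster's data.

```python
import numpy as np

# Hypothetical historian export: t in seconds, q in gal/sec.
t = np.arange(0, 60, 10, dtype=float)
q = np.array([9.3, 9.5, 9.4, 9.6, 9.5, 9.7])

# Fit low-degree polynomials; smaller residuals at a given degree
# indicate that degree captures the shape of the data.
for deg in (1, 2, 3):
    coeffs, residuals, *_ = np.polyfit(t, q, deg, full=True)
    print(f"degree {deg}: residual sum of squares = {residuals}")
```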

Edit: Actually, I stand corrected. I don't know what I was thinking about Simpson's rule not being the most accurate for linear equations (the fourth derivative is of course zero). I'll rephrase my statements above and say that other, easier and more efficient methods may exist for your data set compared to Simpson's rule.
 

FAQ: Applying Simpson's Rule to Data Historian Sampling

What is Simpson's Rule and how is it applied to data historian sampling?

Simpson's Rule is a numerical method for approximating the area under a curve. It divides the interval into pairs of subintervals and fits a quadratic through each set of three consecutive points to estimate the area over those subintervals. Applied to data historian sampling, the stored data points stand in for the curve: the quadratic pieces interpolate between samples, typically giving a more accurate total than methods that assume the signal is constant or linear between samples.
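
A minimal sketch of composite Simpson's rule in Python, assuming uniformly spaced samples; the function name, the sample values, and the even-interval check are illustrative, not taken from the FAQ.

```python
def simpson_total(samples, dt):
    """Composite Simpson's rule for uniformly spaced instantaneous
    rates; requires an even number of intervals (odd sample count)."""
    n = len(samples) - 1  # number of intervals
    if n < 2 or n % 2 != 0:
        raise ValueError("need an even number of intervals")
    total = samples[0] + samples[-1]
    total += 4 * sum(samples[1:-1:2])  # odd-index interior points
    total += 2 * sum(samples[2:-1:2])  # even-index interior points
    return total * dt / 3.0

# 10-second samples over one minute (7 samples = 6 intervals)
rates = [9.3, 9.5, 9.4, 9.6, 9.5, 9.7, 9.6]
print(simpson_total(rates, 10.0), "gallons")
```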

Why is Simpson's Rule often preferred over other methods for data historian sampling?

Simpson's Rule is often preferred because it accounts for the curvature of the data, unlike methods such as the trapezoidal rule, which only considers the straight lines between data points. This makes it a more accurate way to estimate the area under a curve and can provide more reliable results when the underlying signal is smooth.
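
As a quick sanity check of that claim (with made-up numbers, not the poster's data): sample $f(t) = t^2$ at $t = 0, 1, 2$. The trapezoidal rule gives

$$T = \tfrac{1}{2}\left(f(0) + 2f(1) + f(2)\right) = \tfrac{1}{2}(0 + 2 + 4) = 3,$$

while Simpson's rule gives

$$S = \tfrac{1}{3}\left(f(0) + 4f(1) + f(2)\right) = \tfrac{1}{3}(0 + 4 + 4) = \tfrac{8}{3},$$

which matches $\int_0^2 t^2\,dt = \tfrac{8}{3}$ exactly.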

How do you determine the number of sections to use when applying Simpson's Rule to data historian sampling?

The number of sections is set by how often the data is collected: for a fixed sampling interval, the samples themselves define the subintervals. More data points generally give a more accurate estimate; the practical limits are the requirement of composite Simpson's Rule that the number of subintervals be even (an odd number of samples) and the noise present in the measurements themselves.

Can Simpson's Rule be applied to non-uniformly spaced data points?

The standard composite form of Simpson's Rule assumes uniformly spaced points. Variants exist for irregular spacing, but they are more involved and the accuracy of the estimate may suffer. In practice it is often simpler to fall back on the trapezoidal rule, which handles uneven spacing directly, or to resample the data onto a uniform grid first.
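
A sketch of the uneven-spacing fallback mentioned above, with the trapezoidal rule written out explicitly so arbitrary timestamps pose no problem; the arrays are invented values, not real historian data.

```python
import numpy as np

# Irregular timestamps (seconds) and matching rates (gal/sec).
t = np.array([0.0, 10.0, 21.5, 30.0, 42.0])
q = np.array([9.3, 9.5, 9.2, 9.6, 9.4])

# Trapezoidal rule: each interval contributes
# (average of the two rates) * (interval width).
total_gallons = np.sum(0.5 * (q[:-1] + q[1:]) * np.diff(t))
print(total_gallons)
```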

What are the limitations of applying Simpson's Rule to data historian sampling?

One limitation of using Simpson's Rule is that it assumes a smooth curve between data points. If the data is highly irregular or contains significant outliers, the accuracy of the estimation may be compromised. Additionally, Simpson's Rule may not be suitable for all types of data, such as discrete or categorical data, and other methods may need to be used in these cases.
