
Summary of Coleman and Steele and how we might use what they present in our case
CHPT1 summary:
Two main types of errors in an experiment, depending on their behavior (P.9):
Errors that stay constant during the experimental period (systematic, or bias)
Errors that vary during the experimental period (random)
The uncertainty that we are interested in, and what we report, is an estimate of a
region within which the actual error in the experimental result might lie.
An important concept is the standard uncertainty, which is an estimate of the
standard deviation of the parent population from which a particular elemental
error originates. (P.12)
Error sources that vary between measurements are random and they are all
included in the calculation of Sx. Error sources that do not vary between
measurements are not included in Sx. (P.12)
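As a minimal numeric sketch of this point (all readings hypothetical), Sx computed from repeated readings captures only the error sources that vary between measurements; dividing by the square root of N gives the random standard uncertainty of the mean:

```python
import math
import statistics

# Hypothetical repeated readings of one measured variable
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]

N = len(readings)
x_bar = statistics.mean(readings)    # sample mean
s_x = statistics.stdev(readings)     # sample standard deviation Sx (N - 1 divisor)
s_xbar = s_x / math.sqrt(N)          # random standard uncertainty of the mean

print(x_bar, s_x, s_xbar)
```

Any error source that stayed constant during these six readings leaves no trace in Sx, which is why it must be carried separately as a systematic uncertainty.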
Standard uncertainties are either type A or B: (P.13)
Type A: has the symbol s and is evaluated by statistical analysis of a series of
measurements
Type B: has the symbol u and is evaluated by means other than statistical
analysis.
Expanded uncertainty has the upper-case symbol U and is associated with a
confidence level.
Expansion from measurement uncertainty to experimental uncertainty (P.15)
Sometimes the uncertainty specification must correspond to more than an estimate of
the goodness with which we can measure something. This is true for cases in which the
quantity of interest has a variability unrelated to the errors inherent in the measurement
system.

Even if the operating condition is steady, there will be time variations in the
measured quantity that appear as random errors. Also, the inability to reset
the system to the exact operating condition from test to test will cause
test-to-test variations and hence more data scatter.

Replication level (Very Important) (P.19)

In terms of uncertainty analysis there are three replication levels:
Zeroth-order: one test is collected (like one 640-point collection) and a
detailed uncertainty analysis is done. The uncertainty obtained here arises
directly from the errors and accuracy of the measuring system, so only the
measuring-system errors show up in this analysis. This is the best (lowest)
uncertainty that we can get.
First-order: multiple tests are compared (e.g., three tests at the same
probe angle). The uncertainty given by the three tests includes the
measuring-system errors plus any other influencing factors that vary from
test to test. The uncertainty at this level is expected to be higher than
the zeroth-order value since it contains more error sources. Note here about
systematic uncertainty (mentioned in CHPT5): sometimes the systematic
uncertainty doesn't vary between zeroth- and first-order replication.
Nth-order: takes variation in instrumentation systematic error into account (not
well understood).

Comparing zeroth- and first-order replication uncertainties seems to give us an
indication of the test-to-test variation.
CHPT2 summary:
Review the end of P.38 and the beginning of P.39 to see how the confidence
intervals are defined.
A very important question is: how well does the standard deviation of a measured
variable describe the random variation in the variable? (P.40)
The answer is that, for the standard deviation to fully describe the random
behavior of the measured variable, the N measurements of the variable must
include all the random error sources. If a fast DAQ system is used, a large
number of samples can be gathered in a short time, and it is possible that not
all the random factors that influence the measured variable are present during
the collection (due to the long time scale of those error sources). Coleman and
Steele suggest that the mean of one such collection should then be regarded as
one measurement of the variable.
Pooled standard deviation equation (Eq. 2.20) (P.44)
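As a sketch of the pooling idea (not a transcription of Eq. 2.20 itself), one common form weights each sample's variance by its degrees of freedom; the data sets below are hypothetical repeat tests at one set point:

```python
import math
import statistics

def pooled_stdev(samples):
    """Pool the standard deviations of several data sets, weighting
    each sample variance by its degrees of freedom (N_i - 1)."""
    num = sum((len(s) - 1) * statistics.variance(s) for s in samples)
    den = sum(len(s) - 1 for s in samples)
    return math.sqrt(num / den)

# Hypothetical: three repeat tests of the same variable
sets = [[10.1, 9.9, 10.0], [10.2, 9.8, 10.1], [10.0, 10.3, 9.9]]
print(pooled_stdev(sets))
```

Pooling uses the scatter within each test, so it characterizes the random variation without being inflated by set-point differences between tests.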
Review section 2-4 (P.47) for outlier rejection criterion
Systematic standard uncertainty can come from different parent distributions (such
as Gaussian, rectangular, and triangular). Depending on the parent distribution,
the proper value of the standard deviation should be used. (P.52 and P.53)
Note that the standard systematic uncertainty has the same value regardless of
whether the measured variable is a single reading or a mean value. The averaging
process does not affect the systematic uncertainties. (P.54)
The way to combine the systematic and random standard uncertainties is given in
Eq. 2.25 (P.54).
To associate a confidence level, the book recommends using a coverage factor as in
Eq. 2.26 (P.54).
The value of the coverage factor depends on the error-source parent distributions,
but it is argued that the central limit theorem permits the use of the Student's t
value as the coverage factor. The degrees of freedom for the coverage factor can be
obtained using Eq. 2.28. Review Appendix C, in which the authors show that the
large-sample approximation is a very good assumption, applicable in many
engineering applications, so there is no need to worry about Eq. 2.28. So if a 95%
confidence interval is needed, the coverage factor can be taken as 2. (P.54)
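Putting the last two points together, a minimal sketch (with hypothetical uncertainty values) of combining the systematic and random standard uncertainties and then expanding with the large-sample k = 2:

```python
import math

# Hypothetical standard uncertainties for one measured variable
b_x = 0.15      # systematic standard uncertainty
s_xbar = 0.08   # random standard uncertainty of the mean

# Eq. 2.25-style root-sum-square combination
u_x = math.sqrt(b_x**2 + s_xbar**2)

# Eq. 2.26-style expansion; k = 2 is the large-sample 95% coverage factor
k = 2.0
U_95 = k * u_x
print(u_x, U_95)
```

With small degrees of freedom one would replace k = 2 by the Student's t value, but per Appendix C the large-sample value is adequate for most engineering cases.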

CHPT3 summary:

Error propagation (P.63):
When propagating error through a DRE, correlated errors between the measured
variables are most likely present. This holds true for both systematic and random
errors; although the random errors of different variables are usually
uncorrelated, they can become correlated due to unsteadiness in the measured
process.
Error propagation to a result can be done using Eq. 3.13 instead of the usual
error propagation formula. The benefit of using Eq. 3.13 is that any correlated
effect, due to unsteadiness for instance, is already captured in it. It is
recommended that the results from Eqs. 3.12 and 3.13 be compared; differences
between the two equations suggest correlated errors (P.64).
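A minimal sketch of that comparison, using a hypothetical result r = X·Y and made-up paired readings whose fluctuations are deliberately correlated:

```python
import statistics

# Hypothetical paired readings; both variables fluctuate together
# (as if driven by the same process unsteadiness)
X = [10.0, 10.4, 9.6, 10.2, 9.8]
Y = [5.0, 5.2, 4.8, 5.1, 4.9]

# Direct route (Eq. 3.13 style): form the result sample by sample
r = [x * y for x, y in zip(X, Y)]
s_r_direct = statistics.stdev(r)

# TSM route assuming independent variations (Eq. 3.12 style):
# s_r^2 ≈ (∂r/∂X)^2 s_X^2 + (∂r/∂Y)^2 s_Y^2, partials at the means
x_bar, y_bar = statistics.mean(X), statistics.mean(Y)
s_X, s_Y = statistics.stdev(X), statistics.stdev(Y)
s_r_tsm = ((y_bar * s_X) ** 2 + (x_bar * s_Y) ** 2) ** 0.5

# A large gap between the two estimates flags correlated variations
print(s_r_direct, s_r_tsm)
```

Here the direct estimate comes out larger than the TSM estimate because the positive correlation adds a covariance term that the independence assumption omits.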
Section 3-1.5 suggests that the X increment is not necessarily the uncertainty in
the variable, as the figure might suggest. It also seems a good check to change
the increment and see whether the resulting derivatives have converged or are
still changing. (P.70)
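That convergence check can be sketched as follows, with a hypothetical DRE standing in for the real data-reduction equation:

```python
# Central-difference sensitivity dr/dX, evaluated with a shrinking
# increment to check that the derivative has converged; the increment
# is a numerical parameter, not the uncertainty in X.
def dre(x):
    return x ** 3  # hypothetical data-reduction equation

x0 = 2.0
for dx in (0.1, 0.01, 0.001):
    deriv = (dre(x0 + dx) - dre(x0 - dx)) / (2 * dx)
    print(dx, deriv)  # should settle near the analytic value 3 * x0**2 = 12
```

If the printed values keep drifting as dx shrinks, the increment (or the DRE evaluation) needs attention before the propagated uncertainty can be trusted.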
In using the MCM for error propagation, correlated error effects are
straightforward to implement (P.74).
Be aware when using the MCM to estimate the expanded uncertainty: after finishing
the simulation, look at the histogram and make sure there is no skewness in the
result; otherwise the ± uncertainty won't be symmetrical, and Appendix E needs to
be reviewed in that case (P.78).
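A minimal Monte Carlo sketch (hypothetical DRE r = X/Y, made-up error magnitudes) showing both points: correlation is handled simply by sampling the shared error jointly, and a skewness statistic checks whether a symmetric ± interval is justified:

```python
import random
import statistics

random.seed(1)

# Hypothetical DRE r = X / Y with one shared (correlated) error source
M = 20000
r = []
for _ in range(M):
    e = random.gauss(0.0, 1.0)              # error common to X and Y
    X = 100.0 + e + random.gauss(0.0, 0.5)  # independent part of X's error
    Y = 50.0 + e + random.gauss(0.0, 0.5)   # independent part of Y's error
    r.append(X / Y)

mean_r = statistics.mean(r)
s_r = statistics.stdev(r)

# Standardized sample skewness: values far from zero mean the output
# histogram is lopsided and a symmetric ± uncertainty is not appropriate
skew = sum(((v - mean_r) / s_r) ** 3 for v in r) / M
print(mean_r, s_r, skew)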

CHPT4 summary:
Point to remember when specifying thermocouple accuracy (P.107)
When an expression for the fractional uncertainty in a temperature (UT/T) is
encountered (as in Examples 4.1, 4.2, and 4.3), the temperature T must be expressed in
absolute units of kelvin or degrees Rankine and not in degrees Celsius or Fahrenheit.
Similarly, the pressure p in Up/p must be absolute, not gauge. However, an absolute
uncertainty in a quantity has the dimensions of a difference of two values of that
quantity, and for UT this means the units are Celsius degrees (which are equal to
kelvins) or Fahrenheit degrees (which are equal to Rankine degrees). Thus, if someone
says that he or she can measure a temperature of 27°C with 1% relative uncertainty, UT
is 3°C (which is 0.01 times 300 K) and not 0.27°C.
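The arithmetic behind that example, spelled out:

```python
# 1% relative uncertainty on a 27 °C temperature must be taken on the
# absolute (kelvin) value, not on the Celsius number
T_C = 27.0
T_K = T_C + 273.15   # temperature in kelvin (≈ 300 K)
U_T = 0.01 * T_K     # ≈ 3 kelvins, i.e. 3 Celsius degrees, not 0.27 °C
print(U_T)
```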
Section 4-7.1 (P.113) shows why it is very hard to measure efficiency by means of
temperature measurements. The temperature derivatives in the efficiency equation
are very steep and sensitive; hence, highly accurate thermocouples are needed for
any good estimate of the efficiency.

CHPT5 summary:
Random errors:

In steady-state tests where the variations of the measured variables are not correlated,
(sr )TSM and (sr )direct should be about equal (within the approximations inherent in the
TSM equation). However, when the variations in different measured variables are not
independent of one another, the values of sr from the two equations can be significantly
different. A significant difference between the two estimates may be taken as an
indication of correlated variations among the measurements of the variables, and the
direct estimate should be considered the more correct since it implicitly includes the
correlation effects.(P.127)
Data sets for determining estimates of standard deviations of measured variables
or of experimental results should be acquired over a time period that is large relative to
the time scales of the factors that have a significant influence on the data and that
contribute to the random uncertainty. In some cases, this means random uncertainty
estimates must be determined from auxiliary tests that cover the appropriate time
frame (P.128 and 129).


Calculating and interpreting srbar (P.129):
Using high-speed data acquisition systems, one could take an increasingly large
number of measurements during a short time, making srbar go to zero. How useful
such estimates would be from an engineering viewpoint is certainly open to
question.

For a digital output, the minimum 95% random uncertainty in the reading resulting from
the readability (assuming no flicker in the indicated digits) should be taken as one-half of
the least digit in the output. Of course, the random uncertainty could be
significantly less than this value. When there is no
flicker in the output of a digital instrument (at a steady-state condition), the random
errors are essentially damped by the digitizing process. (P.130)
Review the example in Section 5-3.1 (P.130).
Due to correlation between the two pressure stations, the random error in the
discharge coefficient has correlation terms between the pressures. Not accounting
for the correlation in the usual error propagation formula causes the error to be
very high compared to the direct method of calculating sr; see Table 5.2 (P.132)
for a numerical comparison.
Review the example in Section 5-3.3 (P.135).
Depending on the sign of the partial derivative in the correlated term, sr from
the direct method can be either smaller or larger than sr determined from the
TSM.
In the authors' experience, in many timewise tests on complex engineering systems
(such as one of the rocket engine tests in Section 5-3.3 or determination of the drag on a
ship model during one run down a towing tank), the result of the test should be
considered a single data point since it is acquired over a time period that is short
compared to the variations that influence the system being tested. In
such situations sXbar and srbar have little engineering meaning or use, particularly if in the
short period data are taken at a high rate and the 1/N factor is used to drive sXbar and
srbar to arbitrarily small numbers.(P.138)
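The "arbitrarily small" effect is just the square-root-of-N factor at work, as this sketch with a hypothetical per-reading standard deviation shows:

```python
import math

# The random standard uncertainty of a mean shrinks like 1/sqrt(N), so a
# fast DAQ can drive it toward zero over a short record even though the
# long-time-scale error sources were never sampled at all.
s_x = 0.5  # hypothetical standard deviation of individual readings
for N in (10, 1000, 100000):
    print(N, s_x / math.sqrt(N))
```

The small numbers at large N are statistically correct but, as the authors note, engineering-meaningless when the record is shorter than the system's dominant time scales.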
Systematic errors:
In some cases, because of time or cost, the measurement system is not calibrated in its
test configuration. In these cases the systematic errors that are inherent in the
installation process must be included in the overall systematic uncertainty determination
(P.140).
(Maybe calibrating the five-hole probe by applying the pressure to the actual
probe instead of the current method would eliminate the installation errors.)

In the absence of calibration, the instrument systematic uncertainty should be
obtained from the manufacturer's information. This accuracy specification will
usually consider
such factors as gain, linearity, and zero errors, and it should be taken as a systematic
uncertainty when no other information is available. This uncertainty will be in addition to
installation uncertainties that may be present in the measurement. (P.140).
Depending on the information available, sometimes we estimate bX to be constant
(i.e., not a function of X), and this value is often reported as a % of full
scale (P.142). (So maybe what is given to us by the manufacturer is the
systematic error of the instrument?)
Systematic uncertainty in the A/D converter (P.142):
The 95% confidence uncertainty resulting from the digitizing process is usually
taken to be ±1 LSB (least significant bit). The resolution of an LSB is equal to
the full-scale voltage range divided by 2^N, where N is the number of bits used
in the A/D converter. The systematic uncertainty associated with the digitizing
process is then taken as ±1 LSB. This systematic uncertainty will be in addition
to the other biases inherent in the instrumentation system (such as gain, zero,
and nonlinearity errors).
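The LSB resolution formula in a line of code (converter size and voltage range here are hypothetical):

```python
# Resolution of one least significant bit: full-scale voltage range
# divided by 2**n_bits
def lsb_resolution(full_scale_volts, n_bits):
    return full_scale_volts / 2 ** n_bits

# Hypothetical 16-bit converter on a 10 V (±5 V) full-scale range
print(lsb_resolution(10.0, 16))  # ≈ 1.53e-4 V per LSB
```

This also makes the next recommendation concrete: reading a signal that spans only a small fraction of the range wastes most of the 2^N codes and inflates the relative digitizing error.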
It is highly recommended that the DAQ system's full-scale voltage range be close
to the voltages it is being asked to read. This reduces the systematic error due
to digitizing. (P.142)
Correlated systematic error (P.145):
Obviously, the systematic errors in the variables measured with the same
transducer are not independent of one another. Another common example occurs
when different variables are measured using different transducers, all of which
have been calibrated against the same standard, a situation typical of the
electronically scanned pressure (ESP) measurement systems in wide use in aerospace
test facilities. In such a case, at least a part of the systematic error arising from the
calibration procedure will be the same for each transducer, and thus some of the
elemental error contributions in the measurements of the variables will be correlated.
Estimation of the covariance terms for the correlated systematic errors in Xi and in Xk is
given by Eq. 5.17 (P.145).
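A sketch of the Eq. 5.17 idea (the function name and the 0.02 value are hypothetical): the covariance estimate is the sum of products of the elemental systematic standard uncertainties that the two variables share:

```python
# Covariance estimate for correlated systematic errors in Xi and Xk:
# sum the products of the elemental systematic standard uncertainties
# of the error sources common to both variables.
def covariance_bik(shared_b_i, shared_b_k):
    """shared_b_*: elemental systematic standard uncertainties of the
    sources common to both variables, in matching order."""
    return sum(bi * bk for bi, bk in zip(shared_b_i, shared_b_k))

# Hypothetical: two pressures from transducers calibrated against the
# same standard, with one shared calibration source of 0.02 (units of p)
b_ik = covariance_bik([0.02], [0.02])
print(b_ik)  # 0.0004
```

This b_ik term then enters the systematic-uncertainty propagation alongside the usual squared terms, which is exactly the ESP-system situation described above.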
Review Example 5-4.2.1 (P.147); it shows how to account for correlated systematic
errors in an uncertainty analysis.
Note that the effects of systematic uncertainties do not cancel out in comparative
experiments in which the result is the difference of the results of two tests;
however, the systematic uncertainty in the difference can be significantly less
than the systematic uncertainty in the result of either test A or B. (P.154)
Sections 5-5 and 5-6 have extremely useful examples and can be used as a guide.
Very Important and useful (P.165 and 166)
The utility of a first-order replication comparison in the debugging phase of an
experiment uses the following logic. If all the factors that influence the random error of
the measured variables and the result have been accounted for properly in determining
sr, then the random standard uncertainty obtained from the scatter in the
results at a given set point should be approximated by sr. Here sr is the random
uncertainty of the result determined from propagating the estimated random uncertainty
of the measured variables using the TSM or an sr directly calculated previously in similar
tests. If the random uncertainty in the results is greater than the anticipated
values, this indicates that there are factors influencing the random error of the
experiment that are not properly accounted for. This should be taken as an
indication that additional debugging of the experimental apparatus and/or
experimental procedures is needed.
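The debugging logic above can be sketched as a simple comparison; the anticipated sr, the repeat results, and the factor-of-two threshold are all hypothetical placeholders:

```python
import statistics

# Anticipated random standard uncertainty of the result, e.g. from TSM
# propagation or from earlier similar tests (hypothetical value)
s_r_anticipated = 0.05

# Hypothetical repeat results at one set point (first-order replication)
results = [1.02, 0.97, 1.10, 0.91, 1.06]
s_scatter = statistics.stdev(results)

# Illustrative screen: scatter well above expectation suggests an
# unaccounted-for random error source
if s_scatter > 2 * s_r_anticipated:
    print("scatter exceeds expectation: more debugging needed")
else:
    print("scatter consistent with anticipated random uncertainty")
```

A rigorous version would use an F-test or confidence interval on the two variance estimates rather than a fixed factor, but the comparison itself is the point.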
A random order of test-point settings rather than a sequential order is
preferred, to reduce hysteresis effects. (P.173)

CHPT7 summary:
Different uses of the regression model (P.220 and 221):
1. Some or all (Xi, Yi) data pairs from different experiments
2. All (Xi, Yi) data pairs from the same experiment
3. New X from same apparatus
4. New X from different apparatus
5. New X with no uncertainty

If the systematic standard uncertainties for the (Xi, Yi) data and the (Xi+1, Yi+1) data are
obtained from the same apparatus and thus share the same error sources, their
systematic errors will be correlated.(P.220)
Calibration-curve example: P = mV + C
The uncertainty in Vnew includes the uncertainty in the calibration curve as well as the
uncertainty in the voltage measurement, Vnew. If the same voltmeter is used in the
experiment as was used in the calibration, the systematic error from the new voltage
measurement will be correlated with the systematic error of each Vi used in finding the
regression, and appropriate correlation terms are needed. (P.220)
We use the same DAQ for both calibration and collection of the actual data, so
correlated systematic errors show up.
Often this perfect intercept is Y = 0 at X = 0. This perfect value of the intercept should
never be forced on the data. (P.222)
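A minimal least-squares sketch of that advice, with hypothetical calibration data for P = mV + C: the intercept is left free, and the fit returns a small but nonzero C rather than the "perfect" zero:

```python
import statistics

# Hypothetical calibration pairs (voltage, pressure)
V = [0.0, 1.0, 2.0, 3.0, 4.0]
P = [0.03, 1.01, 2.05, 2.98, 4.02]

# Ordinary least squares for P = m*V + C with the intercept free
v_bar, p_bar = statistics.mean(V), statistics.mean(P)
m = (sum((v - v_bar) * (p - p_bar) for v, p in zip(V, P))
     / sum((v - v_bar) ** 2 for v in V))
C = p_bar - m * v_bar
print(m, C)  # fitted intercept is small but not forced to zero
```

The nonzero C absorbs any zero offset actually present in the data; forcing C = 0 would push that offset into the slope and bias every value read off the curve.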
Eqs. 7.27 and 7.28 represent the full uncertainty of the calibration process. They
include the errors in each standard pressure we use plus the errors in the voltage
reading using the Xducer and DAQ (both systematic and random): the Xducer has
random uncertainties in the output voltage, and the DAQ system has small errors in
reading the output voltage. Correlated errors between the different measured
pressures do exist (both systematic and random), and correlated errors between the
different measured voltages exist as well. Note that the first summation terms in
Eqs. 7.27 and 7.28 can be given by Eqs. 7.21 and 7.22 if we believe that the
errors are the same for each data point used in the calibration. (P.226)
