
STIMULI TO THE REVISION PROCESS

Stimuli articles do not necessarily reflect the policies of the USPC or the USP Council of Experts.

An Evaluation of the Indifference Zone of the USP 905 Content Uniformity Test

Dennis Sandell, Greg Larner, James Bergum, Walter Hauck, Jeffrey Hofer, William Brown, Horacio Pappa

ABSTRACT The indifference zone, in the test for Uniformity of Dosage Units 905, was
questioned in a recent Stimuli article (PF 37(1) [Jan.–Feb. 2011]) in which the authors found
that the current test rewards batches with an off-target mean. In this Stimuli article, this
potential issue is reviewed. This article demonstrates that the monograph potency
requirement implicitly ensures that the potential issue cannot occur in practice, because an
off-target batch will fail potency. On the basis of the investigations performed, and
considering the difficulties involved in modifying the harmonized 905 test, we recommend
retaining the current test in the chapter as-is.
INTRODUCTION
The current method for confirming content uniformity in Uniformity of Dosage Units 905 (1)
continues to attract interest and inspires research and the possibility for improvement. The
extension of 905 to applications where large sample sizes (>30) are collected is currently in
debate, and several proposals on this subject have been published (2–5). Additionally, the
European Pharmacopoeia (Ph Eur) has published a chapter on this subject (6). USP has
established a Content Uniformity with Large Sample Sizes Expert Panel (CULSS EP) to
develop a new chapter on statistical analysis and acceptance criteria (Large N test) when
assessing content uniformity in sample sizes larger than those called for in the 905 test.
Another task of the CULSS EP is to address a characteristic of the current 905 that was
described in a Stimuli article in PF 37(1) [Jan.–Feb. 2011] (7). That article demonstrated that
the indifference zone (IDZ; see below for a description of this element of the test) of 905
introduces a deviation that, for a given coverage, benefits batches with significantly off-target
mean. Shen and Tsong (7) describe this deviation as a bias, and we will use this term
throughout the remainder of this Stimuli article.
The purpose of this Stimuli article is to describe and put into perspective the possible bias
identified by Shen and Tsong (7), review the history behind the introduction of the IDZ, remind
the reader of the harmonization process that was a key element of the development of 905
as it stands today, and explore the consequences of removing the IDZ and/or modifying the
current 905 test. Resolving this issue is critical for the CULSS EP because clarifying the
Content Uniformity test in 905 is the starting point for developing a Large N test.
Another purpose of this article is to briefly discuss the difference between batch release
testing and compliance testing as described in 905. Unfortunately, there is considerable
confusion among industry and regulatory agencies regarding the role of USP tests for batch
release. USP has clarified that 905 is not intended for batch release and that it is the
sponsor's responsibility to arrange batch release testing to provide reasonable assurance
that a released batch will comply with 905 whenever tested (8). The current plan for the

CULSS EP, following the planned publication of a chapter containing the Large N test,
includes development of a chapter on some approaches to perform batch release testing to
ensure compliance with 905.
THE ISSUE
The harmonized test in 905 is summarized below [assuming a manufacturing target (T) of 100, a limit (L1) for the acceptance value of 15.0, and a limit (L2) for individual doses of 25.0]:
Collect a sample of 30 dosage units from the batch.
Assay 10 units.
Express the individual results as a percentage of the label claim.
Calculate the average (X) and standard deviation (s) of the 10 results.
Let M = 98.5 if X < 98.5, M = 101.5 if X > 101.5, and M = X otherwise (this is the IDZ).
Calculate the acceptance value (AV): AV = |M − X| + 2.4s.
The sample complies if AV ≤ 15.0.
If AV > 15.0, assay the remaining 20 units.
Express the individual results as a percentage of the label claim.
Calculate the average (X) and standard deviation (s) of the 30 results in the combined sample.
Let M = 98.5 if X < 98.5, M = 101.5 if X > 101.5, and M = X otherwise.
Calculate AV = |M − X| + 2.0s.
The sample complies if AV ≤ 15.0 and all results are within 75.0%–125.0% of M.
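The two-stage procedure above can be sketched in code as follows (an illustrative Python sketch, not a compendial implementation; the function names are ours):

```python
import statistics

def acceptance_value(results, k):
    """AV = |M - X| + k*s, where M applies the 98.5-101.5 indifference zone."""
    xbar = statistics.mean(results)
    s = statistics.stdev(results)          # sample standard deviation
    m = min(max(xbar, 98.5), 101.5)        # the IDZ: clamp the reference value M
    return abs(m - xbar) + k * s

def usp_905_pass(stage1, stage2=None, limit=15.0):
    """Two-stage content uniformity decision; results are in % of label claim."""
    if acceptance_value(stage1, k=2.4) <= limit:
        return True                        # Stage 1 complies
    if stage2 is None:
        return False                       # Stage 1 failed, no Stage 2 data supplied
    combined = list(stage1) + list(stage2)
    if acceptance_value(combined, k=2.0) > limit:
        return False
    # Stage 2 additionally requires every result within 75.0%-125.0% of M
    m = min(max(statistics.mean(combined), 98.5), 101.5)
    return all(0.75 * m <= x <= 1.25 * m for x in combined)
```

Note how the IDZ enters only through the clamping of M: when the sample average lies inside 98.5–101.5, the |M − X| term vanishes and AV is driven by the standard deviation alone.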
The operating characteristics (OC) curve for the harmonized pharmacopeial test is shown in Figure 1, assuming that the data follow a normal distribution. Figure 1 shows, for a selection of batch means in the range of 86%–100% label claim (LC), the probability to pass the test in 905 vs. coverage of the interval of 85%–115% LC. This coverage is the true proportion of dosage units in the batch with a content in the range of 85%–115% LC.
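Under the normal-distribution assumption used for the figures, this coverage has a closed form; a minimal sketch (function name is ours):

```python
import math

def coverage(mu, sigma, lo=85.0, hi=115.0):
    """Proportion of units with content in [lo, hi] % LC for a Normal(mu, sigma) batch."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return phi((hi - mu) / sigma) - phi((lo - mu) / sigma)
```

For example, an on-target batch (mean 100% LC) with a standard deviation of 6% LC has a coverage of about 0.988.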

Figure 1. Harmonized USP test vs. coverage of 85%–115% LC.


Figure 1 illustrates the issue described by Shen and Tsong (7). As long as the batch mean is
in the range of 94%–106% LC, the OC curves are similar (so the chance to pass is
essentially independent of batch mean), but outside this range the probability to pass (for a
given coverage) increases with an increasing mean deviation from the target. This bias
clearly appears to be an undesirable property of the test in 905.
Figure 1 shows that for a fixed coverage, the probability to pass increases with decreasing
batch mean. To keep the coverage constant when the mean deviation from the target
increases, the associated standard deviation must decrease. This is illustrated in Figure 2,
which shows four normal distributions that all have 91% coverage of 85%–115% LC [a
coverage of 91% was chosen for the illustration because this was previously studied by Shen
and Tsong (7)]. The difference between the distributions is the mean; this has been set to
86%, 90%, 95%, and 100% LC in this graph. Consequently, to keep the coverage constant,
the standard deviation has been modified as appropriate.
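The construction behind Figure 2 (finding the standard deviation that yields a given coverage at a fixed mean) can be sketched with simple bisection, assuming the mean lies inside 85%–115% LC so that coverage falls monotonically as the standard deviation grows (an illustrative sketch; names are ours):

```python
import math

def coverage_85_115(mu, sigma):
    """Coverage of 85%-115% LC for a Normal(mu, sigma) batch (units of % LC)."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi((115.0 - mu) / sigma) - phi((85.0 - mu) / sigma)

def sigma_for_coverage(mu, target, lo=1e-9, hi=60.0):
    """Bisect for the sigma giving the target coverage at a fixed mean.
    Assumes 85 < mu < 115, so coverage decreases as sigma increases."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if coverage_85_115(mu, mid) > target:
            lo = mid               # still too much coverage: widen the distribution
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a mean of 100% LC, a 91% coverage corresponds to a standard deviation of roughly 8.8% LC; for a mean of 86% LC, the same coverage already requires a standard deviation of roughly 0.75% LC, which illustrates how tight the off-target distributions in Figure 2 must be.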
Figure 2 introduces the question of whether equal coverage directly corresponds to equal
quality. If one requires that the probability to comply with the 905 test should be constant for
a given coverage, regardless of the batch mean, one has implicitly decided that the four
distributions in Figure 2 represent equal quality. But is this reasonable? One can argue that
the batch with a mean of 100% LC is better, because patients receiving medication from this
batch would more often receive the intended dose. It can also be argued that the batch with a
mean of 86% LC is better, because this is much more uniform, meaning that patients will
obtain a more precise dose. What is best is thus a matter of opinion about how to balance mean and standard deviation. For this reason, it is not at all obvious that the OC curves for
different batch means in the 905 test should be required to overlap; this would be a
reasonable requirement only if it was agreed that the four distributions in Figure 2 are equally
good.

Figure 2. Normal distributions with means of 86%, 90%, 95%, and 100% LC, all with 91% coverage of 85%–115% LC.
Another approach that puts the bias into perspective is illustrated in Figure 3. This figure
shows the same data as in Figure 1, but here the probability to pass the harmonized 905 is
displayed against batch standard deviation.

Figure 3. Harmonized USP test vs. batch standard deviation (% LC).


Figure 3 indicates that the issue with the bias observed in Figure 1 might not be a concern
because the bias effect is no longer seen. Also, extremely small standard deviations are
required to allow an off-target batch to comply with 905. Figure 3 shows, for example, that if the batch mean is 86% LC, the standard deviation must be <1% LC for the probability to pass to exceed 95%.
For completeness, Figure 4 shows the same data once more, now displaying batch mean on
the x-axis and showing the probability to pass the harmonized 905 for different standard
deviations (STD).

Figure 4. Harmonized USP test vs. batch mean (% LC).


A final approach to demonstrate the uniformity required is the following: for each mean in the
range of 86%–100% LC, the standard deviations corresponding to 50%, 75%, 80%, 85%,
90%, 95%, 97.5%, 99%, and 99.9% probability to pass 905 were determined. The results
from this exercise are shown in Figure 5.

Figure 5. Standard deviation (% LC) corresponding to different probabilities to pass 905 vs.
batch mean (% LC).
Figure 5 illustrates the change in standard deviation required to increase the probability to
pass the uniformity test in 905 from 50% to 99.9%. Furthermore, Figure 5 demonstrates
that the change in standard deviation required varies with batch mean; when the batch mean
is on target, the standard deviation range is about 5.2%–7.6% LC, whereas when off-target
the range is much narrower. For an extreme batch mean of 86% of target, Figure 5 illustrates
that a marginal change in standard deviation from 0.9% to 1.2% LC would reduce the
probability to pass from 99.9% to 50%.
In summary, this section has shown that although the harmonized 905 exhibits some sort of
bias, it appears to have minor practical influence in that batches that would benefit from the
bias require such a small standard deviation that they are unlikely to exist. Moreover, whether
the bias is indeed a bias is a matter of opinion, depending on how one prefers to balance an
on-target mean and low variability. This bias is reviewed below in further detail, along with
possible approaches to remove the questioned characteristics of 905.
HISTORY OF 905
This section provides a brief historical review of how the USP criterion for content uniformity
has developed over the years.
In USP 15 (from 1955, page 945) and USP 16 (from 1960, page 941), there was a section for
Weight Variation in Physical Tests, but this did not include content uniformity. The first USP
content uniformity chapter appeared in USP 17 (from 1965), although there were separate
headings under Physical Tests that included Content Uniformity (page 905) and Weight
Variation (page 926). All measurements of dosage units and all criteria were expressed as a percentage of the average of the tolerances, called AT (i.e., the potency specifications).
All versions of the Content Uniformity test from USP 17 to the most recent USP have consisted of two stages. Stage 1 consisted of testing 10 dosage units. If all 10 results met the acceptance criteria, then the sample passed the test for Content Uniformity. If the Stage 1 criteria were not met, then an additional 20 dosage units were tested in Stage 2. If the Stage 2 criteria were met, then the sample passed the Content Uniformity test; otherwise, it failed. In Stage 1 of USP 17, if all results were between 85% and 115% of the AT, then the sample passed the Content Uniformity test. If NMT 1 result fell outside 85%–115% AT, then the test proceeded to Stage 2; otherwise, it failed. At Stage 2, if NMT 1 result of all 30 fell outside 85%–115% AT, then the sample passed the Content Uniformity test; otherwise, it failed.
These criteria were subsequently adjusted in USP 18 and 19, as follows.
In USP 18 (from 1970), a new requirement was added to the Content Uniformity test that no
result could fall outside 75%–125% AT in either stage. Additionally, USP 18 introduced new and different criteria for tablets and capsules. In Stage 1, if NMT 1 result fell outside 85%–115% AT for both tablets and capsules, then the sample passed the content uniformity test. For the first 10 dosage units, if NMT 2 results for tablets or 3 results for capsules fell outside 85%–115% AT, then the test proceeded to Stage 2; otherwise, it failed. In Stage 2, NMT 2 tablets from Stages 1 and 2 combined could fall outside 85%–115% AT to pass the Content Uniformity test, whereas for capsules, NMT 3 capsules from Stages 1 and 2 combined could fall outside 85%–115% AT to pass the Content Uniformity test.
In USP 19 (from 1975), the number of decimal places in the criteria was changed to the
tenths place. USP 19 also added a correction for Special Procedures in the content uniformity
requirement to adjust results based on the difference between the weight of the single
dosage unit when using the assay vs. using the special procedure, when different analytical
methods were used for assay and content uniformity.
In USP 20 (from 1980), there were separate general chapters for weight variation and content
uniformity: Content Uniformity 681 and Weight Variation 931. Chapter 905 first
appeared in USP 20–NF 15, Addendum to the Third Supplement (official September 1,
1982). The chapter combined 681 and 931 and allowed uniformity of dosage units to be
demonstrated by either weight variation or content uniformity. Weight variation could be
applied if the product was a liquid-filled soft capsule or if the product contained 50 mg or
more of a single active ingredient comprising 50% or more, by weight, of the dosage-form
unit. The Addendum changed the expression of results from AT to LC, added a coefficient of variation (CV = 100s/mean) criterion, and changed the number of decimal places in the allowable ranges for individual results to a whole number. The test and criteria from the Addendum are provided in Table 1.
Addendum are provided in Table 1.
Table 1. Content Uniformity Test/Criteria before Harmonization

Stage 1 (10 units tested)

USP 20 Content Uniformity Test/Criteria: Express individual results as a percentage of the label claim. Tablets: Pass if all results fall between 85%–115% LC and CV < 6.0%. Go to Stage 2 if NMT 1 result falls outside 85%–115% LC and no result is outside 75%–125% LC. Otherwise, the sample fails content uniformity. Capsules: Pass if NMT 1 result falls outside 85%–115% LC, no result falls outside 75%–125% LC, and CV < 6.0%. Go to Stage 2 if 2 or 3 results are outside 85%–115% LC, no result is outside 75%–125% LC, or CV > 6.0%. Otherwise, the sample fails content uniformity.

JP XIII Content Uniformity Test/Criteria: Express individual results as a percentage of the label claim. Calculate the average (X) and standard deviation (s) of the 10 results. AV = |100 − X| + 2.2s. Pass if AV ≤ 15.0. If AV > 15.0, go to Stage 2.

Ph Eur Uniformity of Content of Single-Dose Preparations, Test A: Calculate the average (X) of the 10 results. Express individual results as a percentage of the average. Count the number of units outside 85%–115% of the average (n15) and the number of units outside 75%–125% of the average (n25). Pass if n15 = 0 and n25 = 0. If n25 = 0 and n15 = 1, go to Stage 2.

Stage 2 (20 additional units tested)

USP 20 Content Uniformity Test/Criteria: Express individual results as a percentage of the label claim. Tablets: Pass if NMT 1 result is outside 85%–115% LC, no result is outside 75%–125% LC, and CV < 7.8%. Otherwise, the sample fails content uniformity. Capsules: Pass if NMT 3 results are outside 85%–115% LC, no result is outside 75%–125% LC, and CV < 7.8%. Otherwise, the sample fails content uniformity.

JP XIII Content Uniformity Test/Criteria: Express individual results as a percentage of the label claim. Calculate the average (X) and standard deviation (s) of the 30 results. AV = |100 − X| + 1.9s. Pass if AV ≤ 15.0 and all results are within 75%–125% LC.

Ph Eur Uniformity of Content of Single-Dose Preparations, Test A: Calculate the average (X) of the 30 results. Express individual results as a percentage of the average. Count the number of units outside 85%–115% of the average (n15) and the number of units outside 75%–125% of the average (n25). Pass if n15 ≤ 1 and n25 = 0.
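As an illustration, the counting procedure of the Ph Eur Test A column of Table 1 can be sketched as follows (our reading of the table; the function names are ours):

```python
def ph_eur_test_a(stage1, stage2=None):
    """Pre-harmonization Ph Eur Uniformity of Content, Test A decision."""
    def count_outside(xs):
        avg = sum(xs) / len(xs)
        pct = [100.0 * x / avg for x in xs]   # results as % of the average
        n15 = sum(1 for p in pct if not 85.0 <= p <= 115.0)
        n25 = sum(1 for p in pct if not 75.0 <= p <= 125.0)
        return n15, n25

    n15, n25 = count_outside(stage1)
    if n15 == 0 and n25 == 0:
        return True                            # Stage 1 complies
    if not (n15 == 1 and n25 == 0) or stage2 is None:
        return False                           # no route to Stage 2
    n15, n25 = count_outside(list(stage1) + list(stage2))
    return n15 <= 1 and n25 == 0
```

Note that, unlike the USP and JP columns, this test is expressed relative to the sample average rather than the label claim, so an off-target but uniform batch is not penalized at all.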
The Addendum also contained criteria for the situation where the average of the potency
specifications was 100% LC or less. The test is the same as shown in Table 1.
USP 20, Addendum to the Fourth Supplement (page 915) contained criteria for the situation
where the average of the potency specifications was >100% LC. In this case, if the average value of the dosage units tested is greater than or equal to the average of the potency limits,
then LC in Table 1 is replaced by LC multiplied by the average of the limits specified in the
potency definition. If the average value of the dosage units tested is between 100% LC and
the average of the potency limits, then LC in Table 1 is replaced by the average of the
dosage units tested.
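This replacement rule can be sketched as a small function (the potency limits shown are hypothetical placeholders; substitute the monograph's actual potency range. The returned value is the factor, in % LC, that stands in for 100% LC in Table 1):

```python
def effective_lc_scale(sample_avg_pct, potency_limits=(95.0, 110.0)):
    """LC replacement rule from USP 20, Addendum to the Fourth Supplement,
    for potency specifications averaging >100% LC.
    sample_avg_pct: average of the tested dosage units, in % LC.
    potency_limits: hypothetical monograph potency range, in % LC."""
    avg_limit = sum(potency_limits) / 2    # average of the potency limits
    if sample_avg_pct >= avg_limit:
        return avg_limit                   # LC scaled by the average of the limits
    if sample_avg_pct > 100.0:
        return sample_avg_pct              # LC replaced by the sample average
    return 100.0                           # at or below 100% LC: LC unchanged
```

With the placeholder limits above (averaging 102.5% LC), a sample averaging 105% LC would be evaluated against ranges centered on 102.5% LC rather than 100% LC.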
USP 20, Fifth Supplement (page 1114) updated 905 to clarify instances of potency specifications >100.0% by adding "expressed as a percent of label claim" to "the average value of the dosage units tested." A limit was added to permit the use of the Special Procedure based on the ratio of the calculated adjustment to the Special Procedure mean: if the ratio is <0.030, then no adjustment is needed.
HARMONIZATION
The current harmonized uniformity of dosage units (UDU) test in 905 was developed in an
effort to harmonize UDU testing for content and mass/weight variation between the existing
905 UDU test (9) and the corresponding tests in the Japanese Pharmacopeia (JP) (10) and
Ph Eur (11). The USP, JP, and Ph Eur tests that were in effect at the time are described in
Table 1.
For the functional form of the test, the goal of the harmonization was to retain the JP
approach to use an AV. The choice of the values of k1, k2, and the IDZ for the mean has
been an area of much discussion and research. The history of the establishment of these
values is provided to dispel any confusion on this topic.
As mentioned above, the goal of the harmonized UDU test was to develop a test that was
similar to the USP and JP tests in effect at that time, when the true mean was equal to 100%,
and to also provide a test that was approximately a 50/50 compromise from an operating
characteristic curve perspective for means differing from 100. The values 96% LC and 92%
LC were examined because they were thought to be the maximum range of reasonable
means to consider given the typical assay specifications of 90.0%–110.0% LC in the U.S.
Initially, the goal was to retain separate criteria for tablets and capsules, but this would inhibit
harmonization; therefore, the focus was on the previous USP tablet test and the JP test.
Because the JP test defined its own values of k1 and k2 and lacked an IDZ, an initial
examination showed that simply evaluating alternative values of k1 and k2 without an IDZ
would not achieve the desired goals. Therefore, the addition of an IDZ was considered. An
extensive matrix of possible values of k1, k2, and IDZ were evaluated to identify the
combination that achieved the best 50/50 compromise between the previous USP tablet test
and JP test. The initial proposal that best met the desired goals was k1 = 2.3, k2 = 2.0, and
IDZ = 1.5. After negotiation at an International Conference on Harmonisation (ICH) meeting
(Japan, 1999) where separate criteria for capsules (k1 = 2.4 and k2 = 1.9, and IDZ = 3.5)
were considered, the final parameters of the test were established as they are today: k1 =
2.4, k2 = 2.0, and IDZ = 1.5.
The following two graphs (Figures 6 and 7) show that the goal of the harmonization process
for content uniformity was achieved with the current USP test (shown as "Harm UDU" in the graphs).

Figure 6. Comparison of harmonized UDU (Harm UDU) test and regional tests in effect at the
time for harmonization (batch mean at 100% LC).

Figure 7. Comparison of harmonized UDU (Harm UDU) test and regional tests in effect at the
time for harmonization (batch mean at 95% LC).
Figures 6 and 7 show that the harmonized UDU test is almost always tighter than the Ph Eur
test A, is virtually identical to the old JP and USP tests when batch mean is on target, and is a
reasonable compromise between old JP and USP tests for off-target batch means.
It is noted that, despite their appearance as tolerance interval factors, the values of k1 and k2 were not selected to achieve any stated confidence or coverage. As discussed, the parameters of the test were derived to develop a test criterion that was a 50/50 compromise between the old JP and USP tests. The introduction of the IDZ was a key element in
achieving this compromise. A removal of the IDZ will make the resulting test tighter, therefore
inhibiting the original goal of harmonization, unless some modification is introduced to
counter this unwanted shift. This issue will be studied in the next section.
INVESTIGATING POTENTIAL SOLUTIONS
Shen and Tsong (7) suggest in their Stimuli article that the IDZ be removed from 905 to
avoid the bias. In Figure 8, the OC curves for 905 with the IDZ removed are shown. As
seen in the graph, this approach does resolve the bias issue; the OC curves for means ≤96% LC overlap, and there is no tendency for a large deviation from the target to be associated
with a greater chance to fulfill the testing requirements.

Figure 8. Operating characteristics of the harmonized USP test without IDZ vs. coverage of
85%–115% LC for different batch means.
As described above, the current 905 was constructed to be a compromise between the old
905 and the JP content uniformity test in effect at that time. A natural question is whether
this compromise still exists after removal of the IDZ. To explore this, Figure 9 compares the
905 test with the IDZ (current USP, solid lines) to the same test but without the IDZ (dotted
lines). The two tests are compared for means of 100%, 96%, and 92% LC (the same means
that were used in the development of the harmonized USP test).

Figure 9. Operating characteristics of harmonized USP test with and without (w/o) IDZ vs.
coverage of 85%–115% LC for mean = 92%, 96%, and 100% LC.
Figure 9 clearly shows that for all batch means, removing the IDZ will lead to a significant
tightening of the current requirements. For example, if the true mean is at target, to have 90%
probability to fulfill requirements, the required coverage of 85%–115% LC increases from
98.0% to 98.6%. This illustrates that if the IDZ is removed, something needs to be changed to
retain the same tightness as that of the current requirement; otherwise, the compromise
reached during the harmonization effort would no longer be maintained.
One alternative to avoid the claimed bias while keeping the IDZ would be to include some
additional requirement. Because the bias appears only when the true batch mean deviates
significantly from target, a natural addition would be to add a requirement on the sample
average. Indeed, there is already an implicit requirement on the average because the
potency is typically required to be within 90%–110% LC. This means that in practice, the test
in 905 should be seen in the light of the need to also fulfill the potency requirement. To
review the effect of this combined requirement (905 and potency), Figure 10 shows the OC
curves for the current 905 when a requirement is added that the average be within 90%–110% LC.

Figure 10. Operating characteristics of the harmonized USP test with the additional
requirement that the average is within 90%–110% LC vs. coverage of 85%–115% LC.
Figure 10 illustrates that the potency requirement solves the majority of the bias issue (there is still some minor bias for batch means in the range of 90%–92% LC). This analysis explains why the perceived bias has not been an issue in practice; the potency requirement has ensured that the bias has never played a role.
To fully remove the bias, the potency requirement on the average would need to be further
tightened. In Figure 11, the performance of the current 905 combined with a requirement
that the average is within 92.0%–108.0% LC is shown. As seen in Figure 11, the additional requirement completely removes the bias because the operating characteristics of the modified USP test remain practically unchanged when the batch mean is in the range of 92%–108% LC.

Figure 11. Operating characteristics of a harmonized USP test with an additional requirement
that the average is within 92%–108% LC vs. coverage of 85%–115% LC.
To remove the bias while keeping the IDZ in the test, one option is thus to add a requirement
for the sample average. Another option is to remove the IDZ, but one must compensate for
this by changing the values of k1 and k2. In Figure 12, the effect of changing the value of k1
while keeping k2 = 2.0 is shown. In this graph, the mean is at 95% LC, because this is when
the 905 test is tightest, both with and without IDZ (compare Figures 1 and 7). The figure
shows the OC curve of the harmonized test in 905 (solid line) compared to those of the
same test without IDZ but with different values of k1 (dotted lines). This graph demonstrates
that it is not possible to change only the value of k1 to achieve a match.

Figure 12. Operating characteristics of a harmonized USP (Harm USP) test compared to
905 without IDZ, k2 = 2.0, and different k1s vs. coverage of 85%–115% LC (mean = 95%
LC).
In Figure 13, the option of changing the value of k2 while keeping k1 = 2.4 is explored in the
same manner as in Figure 12. This analysis shows that reducing the value of k2 to 1.7 results
in an almost perfect match of the test without IDZ to the harmonized USP test, when the
mean is at 95% LC.

Figure 13. Operating characteristics of the harmonized USP test compared to 905 without
IDZ, k1 = 2.4, and different k2s vs. coverage of 85%–115% LC (mean = 95% LC).
Next, we explore how this altered test (harmonized USP without IDZ and k2 = 1.7) with different choices for the value of k1 compares to the current 905 when the mean is at target; this is shown in Figure 14.

Figure 14. Operating characteristics of the harmonized USP (Harm USP) test compared to
905 without IDZ, k2 = 1.7, and different k1s vs. coverage of 85%–115% LC (mean = 100%
LC).
Figure 14 clearly shows that the test without IDZ is more relaxed than the current test and
that no modification of k1 can solve this. This is a general finding; if we determine k1 or k2 for
an off-target mean so the test without IDZ matches the test with IDZ, then the test without IDZ
will be more relaxed for means that are close to target. Similarly, if we determine a value of
k1 or k2 for a match of OC curves when the mean is at target, the test without IDZ will be
tighter than the current test when the mean is off-target. As discussed above, this is the same
challenge that the developers of the harmonized Content Uniformity test faced (12–14); their
solution was to introduce the IDZ and select this, together with the k1 and k2 values, to
achieve the compromise between the USP and JP tests.
There are, of course, other alternatives for adjusting the test without an IDZ so that its outcome matches the current 905. One such alternative is to let the values of k1 and k2 depend on the observed sample average, or to consider another balance between the first- and second-stage sample sizes. Such alternatives, however, add considerable complexity; apart from adding an explicit requirement on the sample average, there appears to be no simple modification of the current 905 that can compensate for the potential removal of the IDZ.
BATCH RELEASE VS. USP TESTING
Understanding the purpose of the test is a necessary precondition when evaluating the
current harmonized Content Uniformity test and any potential modifications. Unfortunately,
there is considerable confusion among industry and regulatory agencies regarding the role of
USP tests for batch release. USP is clear that the results of USP tests apply only to the
sample tested; no inference is intended. Specifically, USP General Notices Section 3.10 (8)
states, "The similarity to statistical procedures may seem to suggest an intent to make inference to some larger group of units, but in all cases, statements about whether the compendial standard is met apply only to the units tested." USP tests are, therefore, not intended to be informative about the properties of the batch from which the sample originated. This intention, however, does not conflict with the fact that a sample from the
batch is often evaluated against 905 at the time of batch release, because the sample
should comply with USP. Thus, failure of content uniformity or of any USP test at the time of
the intended batch release is a warning that needs to be investigated. Importantly, the
contrary is not the case: compliance of one sample from the batch with 905 is not typically
sufficient evidence to conclude that the batch properties are sufficient to control the risk of
future samples not complying with USP.
Another important aspect of USP tests is that any sample, if within its stated shelf life and if
stored properly, should meet all USP requirements for that product when tested. Although it is
not realistic to expect all possible samples of size 10–30 (typical pharmacopeial sample
sizes) from a batch that may contain millions of units to meet the pharmacopeial
requirements, the probability of failing is a business risk to be controlled. Failure raises the
issue of adulteration or misbranding under the Federal Food, Drug, and Cosmetic Act.
The relationship of USP tests to batch release tests is now clear. It is the responsibility of the
manufacturer to determine how to assess a batch so that, if released, it has a sufficiently high
probability of passing the USP tests when tested, where "sufficiently high" is a business decision that must be made. For content uniformity, the manufacturer may accomplish this
with a content uniformity test that includes a sample size appropriate for this purpose (15,16),
or they may use process analytical technologies or other approaches without a specific
content uniformity test at release.
The use of OC curves to evaluate USP tests in this paper and by others is useful as
demonstrated above, although no inference is intended. The OC curves are from the
company perspective: for batches of given characteristics, what is the risk of samples from
those batches failing the USP test?
One consequence of this relationship is that the pharmacopeial test will have less stringent
acceptance criteria than a properly designed batch release test. Complaints that the
pharmacopeial test is not stringent enough may be due to falsely considering the USP test as
a batch release test. One of the authors, when working for USP, was often asked what is learned about the batch based on the pharmacopeial test, reflecting this confusion. It appears
that Shen and Tsong (7) also have confused pharmacopeial testing and batch release
testing.
They note that the USP test is incapable of inferring the characteristics of the population (the
lot). They then propose to replace the test in 905 with a significantly tighter test, such as a
parametric two-sided tolerance interval or two parametric one-sided tolerance intervals; use
of these tests as a pharmacopeial test is not reasonable for the reason described above.
However, using one of these tests, with sample size and tightness determined by the
manufacturer, as a batch release test to ensure that the batch complies with the test in 905,
whenever tested, could be a reasonable approach.

To assist companies faced with these content uniformity issues, the CULSS EP is planning
two additional general chapters. The first will present some methods and acceptance criteria
for USP testing of content uniformity in sample sizes larger than those specified in 905. The
second will address some approaches to setting batch release specifications to ensure a high
probability of meeting the 905 uniformity standard when tested.
CONCLUSIONS
As detailed above, the observation from Shen and Tsong (7) that, due to the IDZ included in USP 905, the operating characteristics of the test vary with sample mean and favor off-target lots has been discussed extensively. This claimed bias is apparent only when the probability to pass 905 is displayed vs. the coverage of the batch. After comparing the
characteristics of batches with different means but identical coverage, we learn that it is a matter of opinion which has the better quality. Should one favor good uniformity or being
on target, and how should these be balanced? One can conclude that the observed bias can
be either good or bad, depending on what is desired.
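The role of the IDZ in the Stage 1 operating characteristic can be explored with a small Monte Carlo sketch. The 98.5–101.5% LC zone, k = 2.4, and the limit L1 = 15 are the published Stage 1 constants of the harmonized test (n = 10, T = 100 assumed); the batch means and SD used in the example, and the simulation settings, are arbitrary illustrations:

```python
import random
import statistics

def usp905_stage1_pass(units):
    """Stage 1 acceptance-value check of the harmonized 905 test (sketch).

    AV = |M - xbar| + k*s with k = 2.4 for n = 10; pass if AV <= L1 = 15.
    M equals xbar inside the 98.5-101.5% LC indifference zone; otherwise
    M is the nearer zone boundary.
    """
    xbar = statistics.mean(units)
    s = statistics.stdev(units)
    m = min(max(xbar, 98.5), 101.5)  # clip xbar to the IDZ
    return abs(m - xbar) + 2.4 * s <= 15.0

def pass_rate(mu, sigma, n_batches=5000, seed=905):
    """Monte Carlo estimate of the Stage 1 pass probability for a
    normally distributed batch with mean mu and SD sigma (in %LC)."""
    rng = random.Random(seed)
    ok = sum(
        usp905_stage1_pass([rng.gauss(mu, sigma) for _ in range(10)])
        for _ in range(n_batches)
    )
    return ok / n_batches
```

Comparing, say, `pass_rate(100.0, 3.0)` with `pass_rate(92.0, 3.0)` shows that at equal variability the on-target batch passes essentially always while a markedly off-target one does not, since the |M − xbar| term grows once the mean leaves the zone; this is the trade-off between uniformity and being on target discussed above.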
The harmonization process between the United States, European, and Japanese
Pharmacopeias aimed to establish common acceptance criteria for content uniformity testing.
This resulted in an effort to find a requirement of the JP type (a tolerance interval test) that
could provide a 50/50 compromise (regardless of true batch mean) between the JP and USP
content uniformity tests in effect at the time. This turned out to be a challenging exercise that
could only be solved by introducing an IDZ of appropriate size. Consequently, the IDZ cannot
simply be removed without destroying the compromise between the pharmacopeias, which we
believe is not warranted.
The current 905 has been used for several years without the claimed bias being an issue in
practice. As explained above, this is due to the potency requirement, which specifies that the
mean should be within 90%–110% LC and thereby implicitly ensures that significantly
off-target batches will not be released. It is, however, somewhat unattractive to ensure content
uniformity through two independent pharmacopeial requirements. Consideration could
therefore be given to adding a requirement on the sample average to the current test in 905. It
was shown above that 92%–108% LC could be suitable, and that this would not tighten the
current requirements but would remove any possibility that off-target material could be
released.
Finally, the option to remove the IDZ and instead modify the Stage 1 and Stage 2 k values
was reviewed. This showed, as was found in the development work during the harmonization
process, that although it is simple to find new k values that match the OC curve of the current
test for one particular batch mean, the resulting test will be tighter or looser than the current
standard at other mean values.
In summary:
- One can question whether the claimed bias is indeed a bias. The same coverage does not
  mean equal uniformity. Variability is directly linked to uniformity. What appears to be a
  bias from a coverage perspective is merely the content uniformity test being true to its
  character and favoring low variability/better uniformity.
- Because of the potency requirement, the claimed bias is not a problem from a practical
  point of view; there is no relevant risk that significantly off-target batches will be
  released.
- If desired, adding a 92.0%–108.0% LC acceptance criterion for the sample average to the
  test in 905 will explicitly ensure that the bias is removed.
- Removal of the IDZ from the 905 test would result in a tighter requirement, and some
  modification of the test would be required to retain its desired properties; this is not
  simply a task of modifying the k values.
- It is important to retain the harmonization of content uniformity testing between
  pharmacopeias.
- There do not appear to be any significant reasons to modify the current harmonized 905
  test.
REFERENCES
1. USP. USP 33–NF 28 (Re-issue). 905 Uniformity of dosage units. Rockville, MD: United
   States Pharmacopeial Convention; 2010. p. R-86.
2. Sandell D, Vukovinsky K, Diener M, Hofer J, Pazdan J, Timmermans J. Development of a
   content uniformity test suitable for large sample sizes. Drug Inform J. 2006;40:337–344.
3. Diener M, Larner G, Pazdan J, Pfahler L, Strickland H, Vukovinsky KE, Anderson S.
   Development of a content uniformity test suitable for sample sizes between 30 and 100.
   Thera Inno Reg Sci. 2009;43(3):287–298.
4. Bergum J, Vukovinsky KE. A proposed content uniformity test for large sample sizes.
   Pharm Tech. 2010;34(11):72–79.
5. Hu Y, LeBlond D. Assessment of large-sample unit-dose uniformity tests. Pharm Tech.
   2011;35(10):82–92.
6. EDQM. Chapter 2.9.47 Demonstration of uniformity of dosage units using large sample
   sizes. In: European Pharmacopoeia 8.0 ed., Vol. 2. Strasbourg, France: Directorate for
   the Quality of Medicines of the Council of Europe; 2014. p. 368.
   http://online6.edqm.eu/ep800/NetisUtils/srvrutil_getdoc.aspx/0L3WoCJ8mE5moC3aqDqKkQ7Hj/20947E.pdf
7. Shen M, Tsong Y. Bias of the USP harmonized test for dose content uniformity. Stimuli to
   the Revision Process. Pharmacopeial Forum. 2011;37(1).
   http://www.usppf.com/pf/pub/index.html
8. USP. General notices and requirements 3.10. USP 38. In: USP 38–NF 33. Rockville, MD:
   United States Pharmacopeial Convention; 2014. p. 34.
9. USP. 905 Uniformity of dosage units. USP 28. In: USP 28–NF 23. Rockville, MD: United
   States Pharmacopeial Convention; 2005. p. 2505–2510.
10. JP. General tests. 10. Content uniformity test. In: The Japanese Pharmacopoeia. XIII ed.
    Tokyo: The Ministry of Health, Labour, and Welfare; 1996. p. 25. [English version].
11. EDQM. Chapter 2.9.6 Uniformity of content of single dose preparations. In: European
    Pharmacopeia 7 ed., Vol. 1. Strasbourg, France: Directorate for the Quality of Medicines
    of the Council of Europe; 2011. p. 296.
12. Hofer JD, Bergum J, Buchanan TL, Colonna A, Cooper D, Cowdery SB, et al. Content
    uniformity – evaluation of the USP pharmacopeial preview. Stimuli to the Revision
    Process. Pharmacopeial Forum. 1998;24(5):7029–7044.
13. Hofer JD, Bergum J, Buchanan TL, Colonna A, Cooper D, Cowdery SB, et al. Content
    uniformity – alternative to the USP pharmacopeial preview. Stimuli to the Revision
    Process. Pharmacopeial Forum. 1999;25(2):7939–7948.
14. Hofer JD, Bergum J, Buchanan TL, Colonna A, Cooper D, Cowdery SB, et al.
    Recommendations for a globally harmonized uniformity of dosage units test. Stimuli to
    the Revision Process. Pharmacopeial Forum. 1999;25(4):8609–8624.
15. Bergum JS. Constructing acceptance limits for multiple stage tests. Drug Dev Ind Pharm.
    1990;16(14):2153–2166.
16. Bergum JS, Li H. Acceptance limits for the new ICH USP 29 content-uniformity test.
    Pharm Tech. Oct 2007. http://www.pharmtech.com/node/226015?rel=canonical
1 This article represents only the authors' opinions and not necessarily those of their
employers.
a USP Content Uniformity with Large Sample Sizes (CULSS) Expert Panel member.
b S5 Consulting.
c Pfizer.
d BergumSTATS, LLC.
e Sycamore Consulting LLC.

f Eli Lilly and Company.


g US Pharmacopeial Convention, Rockville, MD.
h To whom correspondence should be addressed: William E. Brown, Senior Scientific
Liaison, USP, 12601 Twinbrook Parkway, Rockville, MD 20852-1790. tel +1.301.816.8380;
email web@usp.org.