ML19295F300

| Person / Time | |
|---|---|
| Issue date: | 11/20/1980 |
| From: | Dircks W, NRC OFFICE OF THE EXECUTIVE DIRECTOR FOR OPERATIONS (EDO) |
| To: | |
| References: | FRN-52FR10033, TASK-IR, TASK-SE, AA50-2-010, AA50-2-10, AA50-2-36, SECY-80-514, NUDOCS 8012120220 |
| Download: | ML19295F300 (43) |
Text
UNITED STATES NUCLEAR REGULATORY COMMISSION
WASHINGTON, D.C. 20555

November 20, 1980

SECY-80-514
INFORMATION REPORT

For: The Commissioners

From: William J. Dircks, Executive Director for Operations

Subject: REPORT ON THE STATISTICAL TREATMENT OF INVENTORY DIFFERENCES
Purpose:

To transmit to the Commission a report on the "Possible Changes in the Statistical Treatment of Inventory Differences in Nuclear Material Accounting".
Discussion:
On March 31, 1980, the MPA staff briefed the Commission on the subject of two approaches to the treatment of inventory differences (ID) in nuclear material accounting. As a result of the briefing, the Commission requested that MPA and NMSS prepare a paper on possible ways of changing the current treatment of ID to make it statistically valid.
The enclosed report (enclosure 1) was prepared in response to that request.
The report identifies and describes deficiencies in the statistical treatment of ID in three areas:
(1) ID evaluation; (2) determination of ID variability; and (3) ID definition and modeling.
For each of these areas, possible changes to the current regulations, procedures and practices are identified.
The issue of statistical treatment is complex and there is lack of technical agreement among staff members. A procedure was developed to solicit views from all NRC staff (enclosure 2).
Appendix 4 to the report sets forth concerns by some staff members about the text of the report.
The report deals, as requested by the Commission, with possible ways of changing the current treatment of ID.
It identifies thirteen statistical deficiencies which, if changed, would make the treatment of ID statistically more valid.
However, this report does not evaluate the impact of changing the current system.
There already are on-going staff activities that are mentioned in the report which may change current practices.
The staff informs me that in continuation of this effort on review of accounting practices, NMSS intends to:
Contact:
T. S. Sherr, NMSS, a27 4004
L. R. Abramson, MPA, 4?2-7806
1. Identify the extent to which the thirteen statistical deficiencies are currently under evaluation in on-going staff work.

2. Evaluate the benefits that may result from instituting any or all of the possible changes listed in the report and the costs associated with developing and implementing these changes.

3. As appropriate, schedule modifications to current practices.
I will inform the Commission of the plans for this continuing staff effort.
The Commission should be aware that the current MC&A upgrade effort will shift a portion of our interest from the periodic inventory to a more timely knowledge of the accounting for materials.
This principally will be done by more attention to material in processing streams. However, as presently formulated, the periodic inventory under the MC&A upgrade rule essentially would follow current practices.
Hence, the evaluation in item 2 above is important for consideration for the final MC&A upgrade rule.
William J. Dircks
Executive Director for Operations
Enclosures:
1. Statistical Treatment of ID
2. Memo dated June 27, 1980

DISTRIBUTION:
Commissioners
Commission Staff Offices
Exec Dir for Operations
ACRS
Secretariat
POSSIBLE CHANGES IN THE STATISTICAL TREATMENT OF INVENTORY DIFFERENCES IN NUCLEAR MATERIAL ACCOUNTING

TABLE OF CONTENTS

                                                                   Page
I.   Introduction and Summary ....................................    1
II.  Background ..................................................    5
III. Identification of Deficiencies and Possible Changes .........    7
     A. ID Evaluation ............................................    7
     B. Determination of ID Variability ..........................   10
     C. ID Definition and Modeling ...............................   15
Appendix 1 - Staff Requirements Memorandum Dated April 4, 1980 ...   17
Appendix 2 - A Brief Chronology ..................................   18
Appendix 3 - Parameters and Statistics ...........................   19
Appendix 4 - Additional Staff Views ..............................   20
References .......................................................   39
POSSIBLE CHANGES IN THE STATISTICAL TREATMENT OF INVENTORY DIFFERENCES IN NUCLEAR MATERIAL ACCOUNTING

I. Introduction and Summary

This paper is in response to the Staff Requirements Memorandum, item (1),
from Samuel J. Chilk to William J. Dircks, dated April 4, 1980 [5] (see Appendix 1), that resulted from the MPA Briefing, Two Approaches to the Treatment of Inventory Differences in Nuclear Material Accounting, presented to the Commission on March 31, 1980.
The Commission requested "that MPA and NMSS prepare a paper on possible ways of changing the current treatment of inventory differences to make it statistically valid...."
Accordingly, the purpose of this paper is to identify possible changes in the current treatment of inventory differences (ID) with the aim of placing this treatment on a valid statistical basis with respect to terminology, theoretical principles, and operational practices.
In line with the Commission directive, this paper focuses on statistical deficiencies in the current treatment of inventory differences with the objective of identifying possible changes.
The general impact of the identified deficiencies basically relates to the inability of NRC ID evaluation to assure a high probability of alarming and initiating an investigation when a specified significant amount of material is lost or diverted.
While the concentration is on deficiencies associated with a single period inventory difference, many of these deficiencies also apply to more complex issues such as multiple period inventory differences.
However, these more complex issues are not specifically addressed in this paper.
Furthermore, the paper is not an evaluation of the effect of statistical deficiencies; nor is it an assessment of the effectiveness of nuclear material accounting in particular, or of safeguards in general.
The approach of the paper is to identify and describe the statistical problems associated with treating inventory differences in three areas: (1) ID evaluation, (2) determination of ID variability, and (3) ID definition and modeling. For each of these areas, statistical deficiencies are described, possible changes to the current regulations, procedures and practices are identified, and staff activities which address the problem areas are briefly surveyed.
The statistical issues range from the simple to the extremely complex.
The simple issues can in principle be resolved quickly.
These include proper statistical terminology, a focus on statistical tests of hypotheses and the simpler aspects of good modeling of ID and the standard deviation of ID.
The more complex issues will require thorough study and review.
The possible changes identified in this report are not recommendations.
While implementation of the possible changes will make the treatment of inventory differences statistically more valid, not all of the identified changes may be warranted when considering their impact on safeguards effectiveness, inherent limitations and the resources required.
Continuing staff efforts will evaluate the benefits of the possible changes as well as the costs of developing and implementing the changes.
A summary of the statistical deficiencies discussed in the paper and possible changes to remedy them are as follows:
STATISTICAL DEFICIENCIES AND POSSIBLE CHANGES

A. ID Evaluation

1. Deficiency: The procedure for setting the current ID action limits reflects some non-statistical considerations and is not based on the framework of the statistical test of hypotheses. As a result, this procedure does not control the false alarm rate nor the probability of alarming and initiating an investigation when a specified amount of material is lost or diverted. In addition, the current procedure does not address evaluation of multiple period inventory differences. (p.7)

   Possible changes:
   a. For ID evaluation, adopt the framework and terminology of the statistical test of hypotheses. (p.8)
   b. Shift emphasis in the regulations and guides pertaining to statistical tests from the false alarm rate to the probability of alarming and initiating an investigation when a specified amount of material is lost or diverted. (p.9)
   c. When feasible, apply the test of hypotheses to the evaluation of multiple period inventory differences. (p.9)

2. Deficiency: Current alarm criteria are based on non-statistical considerations and are set independently of the variability of ID. (p.7)

   Possible change: Apply statistical principles to the construction of the test of hypotheses. In particular, establish alarm thresholds (or ID action limits) based on ID variability. (p.9)

3. Deficiency: The current hypothesis testing approach does not consider the trade-off between the false alarm rate and the probability of alarming and initiating an investigation when a specified loss or diversion has occurred. (p.7)

   Possible change: Examine and develop approaches (for example, game theory) that provide a mechanism for resolving the trade-off between the false alarm rate and the probability of alarming and initiating an investigation when a specified loss or diversion has occurred. (p.9)
B. ID Variability

1. Deficiency: The term "limit of error" has several inconsistent definitions. (p.10)

   Possible change: Adopt the standard deviation as the measure of ID variability. Eliminate the use of the term "limit of error." (p.14)

2. Deficiency: The regulations and guides fail to distinguish between the true value of a measure of variability, the formula for computing it (the estimator) and the numerical value obtained by substituting observed data into the estimator (the estimate). (p.11)

   Possible change: Modify the regulations and guides to define and distinguish among the true standard deviation, its estimator, and its estimate. (p.14)

3. Deficiency: Not all sources of error are reflected in the calculation for LEID, nor otherwise substantially addressed in any guidelines. (p.11)

   Possible change: Improve guidelines to better model ID variability and to identify and characterize, to the extent possible, on a facility-specific basis, all contributors to the variability of inventory differences, including non-measurement errors. (p.14)

4. Deficiency: Concepts and definitions of error which are in wide use in the nuclear industry may be overly restrictive and are not always consistent with definitions in other disciplines; this could be a source of confusion and could lead to inaccurate evaluation of ID variability. (p.12)

   Possible change: In the development of future guidance for licensees, eliminate concepts and definitions of error that are overly restrictive and confusing. In particular, investigate various approaches to the treatment of bias and its impact on ID variability and clarify the notions of random and systematic error. (p.14)

5. Deficiency: Licensee ID variability models potentially contain weaknesses with respect to model structure and parameter estimation. (p.12)

   Possible changes:
   a. Assure that the model assumptions are appropriately tested and that sensitivity analyses to assess the consequences of wrong assumptions in the modeling of the variability of random errors are performed. (p.14)
   b. Provide guidelines to control the magnitude and effect of modeling errors, including those stemming from non-measurement contributors. (p.14)

6. Deficiency: NRC does not routinely obtain data and other information necessary to independently verify all licensee LEID determinations. (p.14)

   Possible change: Expand licensee reporting requirements and NRC review procedures to effectively verify each licensee's estimated standard deviation of ID. (p.14)
C. ID Definition and Modeling

1. Deficiency: Inventory difference is referenced in official regulations as MUF. Lack of uniformity in terminology is a potential source of misunderstanding. (p.15)

   Possible change: Replace the term MUF by ID in all regulations and official documents. (p.16)

2. Deficiency: Regulations and guides fail to distinguish between true ID and observed ID. This distinction is essential for complete understanding and modeling, and for rigorous statistical treatment of ID. (p.15)

   Possible change: Modify regulations and guides to be consistent with the ANSI Standard N15.16 definitions of parameter, estimator and estimate for true ID, ID estimator and ID estimate, respectively. (p.16)

3. Deficiency: The regulations and guides do not explicitly state the error terms in the ID model and the model does not label random and fixed errors. (p.15)

   Possible change: Improve ID modeling by more explicit consideration of true ID and identification of random and fixed error components, and modify regulations and guides accordingly. (p.16)

4. Deficiency: As defined, ID may reflect impacts of other than current period operations. In particular, the regulations do not require the adjustment of ID to eliminate prior period contributions caused by shipper-receiver differences, waste and scrap operations, and possibly other operational factors. (p.15)

   Possible change: Modify regulations and guides to incorporate statistically acceptable procedures to correct for bias, shipper-receiver differences, and waste and scrap operations. (p.16)
II. Background

Inventory difference (ID) is the difference between the book or record inventory and the physical inventory. At any given time, the book inventory of material in a plant area is determined by adding the quantities of material transferred into the area to the initial inventory of record and subtracting from it the quantities transferred out of the area.
Periodically, a physical inventory is taken to determine directly the total quantity of material on inventory.
If there were no measurement system errors, biases of unknown magnitude, human errors, unmeasured inventory, unmeasured losses nor diversion, the inventory difference would be zero.
If some or all of these factors are present, the observed inventory difference is not expected to be zero.*
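The material-balance arithmetic just described can be sketched as follows; the figures are hypothetical illustrations, not licensee data:

```python
def inventory_difference(beginning_inventory, additions, removals, physical_inventory):
    """Inventory difference (ID): book inventory minus measured physical inventory.

    Book inventory = initial inventory of record + transfers in - transfers out.
    With error-free measurements and no unmeasured losses or diversion, ID = 0.
    """
    book_inventory = beginning_inventory + sum(additions) - sum(removals)
    return book_inventory - physical_inventory

# Hypothetical plant area, quantities in kg:
id_kg = inventory_difference(
    beginning_inventory=120.0,
    additions=[40.0, 35.0],    # transfers into the area during the period
    removals=[30.0, 25.0],     # transfers out of the area
    physical_inventory=139.2,  # measured at the end of the period
)
print(round(id_kg, 3))  # 0.8 -- nonzero because the components carry uncertainty
```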
The usefulness of ID for safeguards decision-making is a major issue.
Determination and evaluation of inventory differences have long played a role in safeguards nuclear material accounting.
The rationale for this role is briefly set forth in the following excerpts from NRC safeguards documents:
"To be effective, safeguards must be capable of providing timely and accurate information on the status of nuclear material and facilities."
(Memorandum Safeguards Objectives, [14]).

"The primary role of material accounting is to provide long-term assurance that material is present in assigned locations and in correct amounts. Through its measurements, records, and statistical analyses, material accounting should provide a loss detection capability to complement the more timely detection capabilities provided by material control and physical protection.
Material accounting plays a primary safeguards role in the accurate assessment of losses or alleged losses."
(Report of the Material Control and Material Accounting Task Force, NUREG-0450, [21, p. 3, Executive Summary]).

"Assurance against undetected loss or diversion of special nuclear material can be achieved only by a measured physical inventory." (Regulatory Guide 5.13, [15, p. 5.13-3]).
Since ID is a quantified measure (perhaps the only such measure available for safeguards decisions at this time), there has been a great deal of interest in its use, validity, and interpretation for decision-making purposes.
Over the years, inventory differences which have exceeded an alarm threshold have been interpreted, not always consistently, as indicators of possible loss or diversion, indicators of "out of control" facility operations, or simply indicators for further investigation.
Detection of loss or diversion is a major safeguards system objective.
Inventory difference information may or may not have a significant role in achieving this objective. To the degree that inventory difference information can assist in the detection of diversion, either directly or indirectly, it contributes to this major objective.

* Throughout this paper the term inventory difference, unless modified, means the observed inventory difference calculated from observed or measured components, which are subject to uncertainty.
Since the observed inventory difference contains error terms of a random nature, statistical techniques should be used in support of decision-making based on ID evaluation. The regulatory framework for determination and evaluation of ID is established by several sections of 10 CFR Part 70 [6]:
o Section 70.51 defines "Material Balance" (including "material unaccounted for" (MUF), now called inventory difference) and also defines "limit of error" (LE). It also specifies record keeping requirements and inventory intervals, and it prescribes limits on "limits of error for...(MUF)" as a function of plant additions to or removals from process.
o Section 70.53 requires certain material status reports to the NRC, including reports of physical inventories, inventory differences, and instances of and reasons for inventory differences exceeding their limit of error or the limit of error exceeding regulatory limits.
o Section 70.57 specifies a licensee measurement control program (including organizational requirements and management review).
o Section 70.58 outlines fundamental nuclear material controls and requires a material control plan from each licensee.
The controls require a facility material control and accounting organization, designation of material balance areas, material custodianship, evaluation of shipper-receiver differences, procedures for scrap control, and other operational controls.
This regulatory framework is the result of a long evolution starting even before the passage of the Atomic Energy Act of 1954.
Many regulatory procedures and activities stemmed from these regulations.
Refer to Appendix 2 for a brief chronology.
III. Identification of Deficiencies and Possible Changes

A. ID Evaluation

1. Deficiencies in Statistical Foundations and Practices

A purpose of ID evaluation is to make decisions concerning whether investigations are warranted in light of the possibility that the observed inventory difference could be the result of a loss or diversion of Special Nuclear Material (SNM).
ID evaluation involves the comparison of measured ID with several action limits.
For decision-making, the statistical test of hypotheses is a traditional approach.
The procedure for setting the current ID action limits, as specified in a 1974 policy letter [13], is not based on the framework of the statistical test of hypotheses and reflects some non-statistical considerations.
As a result, this procedure does not control the false alarm rate nor the probability of alarming and initiating an investigation when a specified amount of material is lost or diverted.
This statistical deficiency stems from the establishment of ID action limits proportional to plant throughput.
As an example, consider a plant with a throughput of 1000 kg during an inventory period.
The alarm threshold for plant shutdown would be set at 10 kg, independently of the variability of the ID.
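The deficiency can be illustrated with a short sketch. The 1% shutdown fraction is inferred from the 10 kg / 1000 kg example above, and the sigma values are assumed for illustration:

```python
# Throughput-based threshold, as described above: a fixed fraction of plant
# throughput (1% here, implied by the 10 kg alarm for a 1000 kg throughput).
def throughput_threshold(throughput_kg, fraction=0.01):
    return fraction * throughput_kg

# Variability-based alternative: a multiple of the ID standard deviation,
# so alarm behavior is tied to measurement uncertainty, not plant size.
def variability_threshold(sigma_id_kg, k=2.0):
    return k * sigma_id_kg

# Two hypothetical plants with the same throughput but very different
# measurement uncertainty:
print(throughput_threshold(1000.0))  # 10.0 kg regardless of sigma
print(variability_threshold(2.0))    # 4.0 kg for a well-measured plant
print(variability_threshold(12.0))   # 24.0 kg for a noisy plant
# With sigma = 12 kg, a 10 kg throughput-based alarm fires constantly;
# with sigma = 2 kg, the same 10 kg alarm is needlessly insensitive.
```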
A Material Control and Accounting Task Force recognized this deficiency and stated:
"The use of current MUF/LEMUF tests in the regulations as a basis for assurance that no significant quantity of material is missing is inappropriate.
The normalization of LEMUF to plant throughput for all types of plants results in very large limits for LEMUFs for high-throughput plants in terms of quantity of material, and small limits for LEMUFs for inventory-dominated plants.
In the former case the inferred probability of detection may be totally inadequate, whereas in the latter case it may be an acceptable level.
In either case, a given level of detection is not assured."
[19, p. II-43]
As a related matter, tests of hypotheses pose a decision-making dilemma.
If a threshold is established at a level that is too low, there will be an excessive false alarm rate.
If the threshold is too high, the probability will be too low for alarming and initiating an investigation when a specified amount of material is lost or diverted.
The decision-maker is implicitly forced to trade off conflicting objectives of a low false alarm rate and a high alarm probability when establishing the alarm threshold.
Although this discussion of deficiencies in the test of hypotheses is restricted to evaluation of single period inventories, these deficiencies also generally apply to multiple period ID and reinventory evaluation.
An example in this area is NRC evaluation of reinventory results.
It should be emphasized that no approach for evaluating ID data, including the hypothesis testing approach, by itself provides an unequivocal basis for a conclusion that no loss or diversion has occurred.
A critical principle of statistical hypothesis testing is that hypotheses can never be proven -- they can only be rejected.
When statisticians use the phrase "the hypothesis is accepted", it does not mean literal or conclusive acceptance of the hypothesis.
It is only a shorthand statement that there is no compelling evidence for rejecting it.*
This is clearly understood and emphasized by statisticians.
For the treatment of ID, the null hypothesis would usually be stated as an assertion that the true ID is zero.
The test rejects this hypothesis if the hypothesis is incompatible with the evidence (i.e., if the observed ID is larger than the alarm threshold used in the statistical test).
If the evidence is not incompatible with the null hypothesis (i.e., the observed ID is below the threshold),
then this fact by itself is insufficient to support a conclusion that the true ID is zero, i.e., no loss or diversion has occurred.
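The test just described can be sketched as a one-sided significance test on the observed ID, assuming it is approximately normal; the sigma and alpha values are illustrative, not regulatory figures:

```python
from statistics import NormalDist

def id_alarm(observed_id, sigma_id, alpha=0.05):
    """One-sided test of H0: true ID = 0 against the loss/diversion
    alternative true ID > 0. Returns True when the alarm threshold is
    exceeded, i.e., H0 is rejected. A False result does NOT establish
    that no loss occurred; it only means the evidence is insufficient."""
    threshold = NormalDist().inv_cdf(1 - alpha) * sigma_id
    return observed_id > threshold

# With sigma = 3 kg, the alpha = 0.05 threshold is about 1.645 * 3 = 4.9 kg:
print(id_alarm(6.0, 3.0))  # True: alarm, investigate
print(id_alarm(2.0, 3.0))  # False: no compelling evidence, not proof of no loss
```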
The NRC has recognized this general problem in various NRC reports releasing ID data to the public.
These reports make the following statements:

"While an inventory difference larger than LEMUF may signal an abnormal situation requiring investigation, a small inventory difference falling within its associated LEMUF is not automatic proof that no loss or theft of material has occurred. Therefore, the NRC relies on evidence provided not only by the material accounting system but also by the internal control system, the physical security system, NRC inspections and evaluations, and NRC and licensee investigations."
[20, p. 6]

"Although an inventory difference larger than its overall measurement uncertainty (limit of error) may signal an abnormal situation requiring investigation, the fact that a small inventory difference falls within its associated limit of error--even an ID of zero--provides no automatic or conclusive proof that loss or theft of material has not occurred.
Therefore, the NRC relies on information provided not only by the material accounting system but also by the internal control system, the physical security system, NRC inspections and evaluations, and NRC and licensee investigations."
[24, Foreword]
2. Possible Changes

Possible changes to correct the ID evaluation deficiencies discussed above are as follows.
a. For ID evaluation adopt the framework and terminology of the statistical test of hypotheses. It should be emphasized that the application of such statistical tests might lead to investigations which detect a loss/diversion or, alternatively, a system anomaly, but they cannot confirm that no loss/diversion or anomaly has taken place.* However, with an adequately specified model and if appropriate data is available, one can estimate the probability of alarming and initiating an investigation when a specified amount of material is lost or diverted.

* "Strictly speaking, therefore, a significance test is a one-edged weapon which can be used for discrediting hypotheses but not for confirming them." [4]
b. Shift emphasis in the regulations and guides pertaining to statistical tests from the false alarm rate to the probability of alarming and initiating an investigation when a specified loss or diversion has occurred.
c. Apply statistical principles to the construction of the test of hypotheses. In particular, establish alarm thresholds (or ID action limits) based upon ID variability and the trade-off between the false alarm rate and the probability of alarming and initiating an investigation when a specified loss or diversion has occurred.
d. Examine and develop approaches (for example, game theory) that provide a mechanism for resolving the trade-off between the false alarm rate and the probability of alarming and initiating an investigation when a specified loss or diversion has occurred.
e. When feasible, apply the test of hypotheses to the evaluation of multiple period inventory differences.
3. Current Staff Activities

A major staff effort is the development of a material control and accountability upgrade rule.
The upgrade rule project started over two years ago to implement the findings of the Material Control and Material Accounting Task Force, NUREG-0450 [21].
The upgrade rule "will try to cast material accounting and ID more realistically in a confirmation role and less in a detection role than the current regulations do."
[1, p.3]
The upgrade rule project is considering inclusion of provisions for periodic measured physical inventories of SNM and associated statistical hypothesis tests, with the purpose of confirming that the licensee's accounting records accurately reflect the licensee's material holdings, within the limits of measurement-induced uncertainty.
The upgrade rule effort generally deals with issues which are independent of the statistical deficiencies identified in this section.
However, the upgrade rule will provide a framework within which statistical advances can be incorporated.
Other staff activities relating to ID evaluation include:
(i) development of improved material loss estimator methodology and statistical methods for nuclear material accountability;

(ii) statistical analysis of cumulative ID, cumulative shipper-receiver differences and the bias correction problem;

(iii) revision of Regulatory Guide 5.33, "Statistical Evaluation of Material Unaccounted for," [18];

(iv) establishment of a statistical inference procedure for control units;

(v) development of an MC&A Upgrade Rule Example System [26].
B. Determination of ID Variability

1. Deficiencies in Statistical Foundations

(a) Limit of Error

The inventory difference is derived as a function of several components.
Some of these components are subject to random variation and, accordingly, ID is treated as a random variable.
The variability of random variables is characterized by measures of dispersion such as range, variance, and standard deviation.
The standard deviation is well established in statistical theory and practice and it is suitable for describing the variability of ID in safeguards applications.
The concept of limit of error (LE) of an estimator, denoted by LEID when applied to inventory difference, is peculiar to the nuclear industry. LE has acquired several definitions and implied definitions in the last three decades; a survey of these definitions is given by Moore [11].
Two definitions of LE are of particular interest:
o By ANSI N15.16 Standard (1974):
"3.2 Definition.
The limit of error of an estimator T is twice the standard deviation of T; that is, twice the square root of the variance of T."
[3]
o By 10 CFR 70.51(a): "(5) 'Limit of Error' means the uncertainty component used in constructing a 95 percent confidence interval associated with a quantity after any recognized bias has been eliminated or its effect accounted for." [6, p. 443]
The ANSI definition is straightforward and defines LE of an estimator in terms of its standard deviation.
The CFR definition is not in accord with accepted statistical terminology and is imprecise and confusing.
For example, the meanings of "uncertainty component" and "recognized bias" are not clear and may lead to incorrect statistical applications.
The confusion in the definition of LE is reflected in the AEC Regulatory Guide 5.18:

"The new ANSI standard N15.16, however, defines limit of error as twice the standard deviation of the estimator. This is not consistent with 10 CFR Part 70 and Regulatory Guide 5.3 since it does not always result in 95 percent confidence intervals...

"The concepts, principles, and referenced methods for calculating limits of error contained in the final draft of ANSI N15.16, 'Limit of Error Concepts and Principles of Calculation in Nuclear Materials Control,'...are generally acceptable to the Regulatory staff for use in nuclear material control and accounting procedures, subject to the following:

"...The calculated limits of error defined in Section 3.2 of the standard should be based on 95% confidence intervals for the estimator, which must consider the effective degrees of freedom associated with the estimated variance." [16]
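The Regulatory Guide's point can be made numerically: with an estimated variance, a true 95 percent interval requires a Student-t multiplier that exceeds 2 when the effective degrees of freedom are small. The sketch below uses standard t-table values, and sigma_hat is an assumed figure:

```python
# Two-sided 95% Student-t multipliers (97.5th percentiles, standard table
# values) for a few effective degrees of freedom; infinity is the normal case.
T_MULTIPLIER_95 = {5: 2.571, 10: 2.228, 30: 2.042, float("inf"): 1.960}

def limit_multiplier_95(effective_df):
    """Multiplier on the ESTIMATED standard deviation needed for a true
    95 percent interval; it equals 2 only approximately, and only for
    large degrees of freedom."""
    return T_MULTIPLIER_95[effective_df]

sigma_hat = 5.0  # hypothetical estimated standard deviation of ID, in kg
for df in (5, 10, 30, float("inf")):
    t = limit_multiplier_95(df)
    print(f"df={df}: 95% limit = {t * sigma_hat:.1f} kg vs 2*sigma_hat = {2 * sigma_hat:.1f} kg")
```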
A measure of the variability of ID is a necessary concept but the term "limit of error" is superfluous and has different connotations for different people. Even the concise ANSI definition does not offer a technical advantage by merely using a special name to denote twice the standard deviation of an estimator.
NRC regulations and guides do not explicitly consider the true value of a measure of variability (true LEID) and fail to distinguish between the true LEID and the observed LEID, where the true LEID is the correct quantity and the observed LEID is derived from inventory measurements. A clear distinction between the true LEID, although unmeasurable, and the corresponding observed LEID is essential for correct and complete understanding and modeling, and for rigorous statistical treatment of LEID. (See Appendix 3)
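The parameter / estimator / estimate distinction (elaborated in Appendix 3) can be sketched with a small simulation; the true sigma, sample sizes, and seed below are all illustrative assumptions:

```python
import random
from statistics import stdev

random.seed(1)
TRUE_SIGMA = 3.0  # the parameter: a fixed constant, unknown in practice

def sigma_estimator(sample):
    """The estimator: a rule (here, the sample standard deviation) that
    maps observed data to a number; it is itself a random variable."""
    return stdev(sample)

# Each inventory period yields a new data set, hence a new estimate of the
# one true sigma; the estimates scatter around the parameter.
estimates = [
    sigma_estimator([random.gauss(0.0, TRUE_SIGMA) for _ in range(10)])
    for _ in range(4)
]
print([round(e, 2) for e in estimates])  # four distinct estimates, one parameter
```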
(b) Error Modeling - Completeness and Structure

The error structure of ID is extremely complex.
Correct modeling is a virtual impossibility.
One difficulty is the complex correlation pattern of some of the errors.
There are correlations over time, correlations because of commonality of instruments, and environmental fluctuations.
Other difficulties relate to changes in instrumentation and processing procedures.
A reasonable modeling goal for material accounting is a workable approximation of error characteristics.
The current regulations require that the calculated ID be based only on measured quantities.
To the extent that not all the elements of a licensee's material balance are measured, the value of an ID is affected by unmeasured contributors as well as by sampling and measurement errors.
However, licensees are currently required to include in their LEID calculations only the uncertainty associated with the material they have measured. Accordingly, not all the sources of error are reflected in the calculation for LEID.
The omitted contributors are sometimes called non-measurement errors.
These may include unmeasured side streams, certain process holdup errors and certain human errors.
Consequently, the measured variability of ID, be it standard deviation or limit of error of ID, is likely to underestimate the corresponding true measure of variability.
The use of an underestimated standard deviation in the application of a statistical test of hypotheses would generally result in a larger false alarm rate and incorrectly calculated probabilities of alarming and initiating an investigation when a specified loss or diversion has occurred.
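The inflation of the false alarm rate can be quantified under a normality assumption; the sigma values are illustrative:

```python
from statistics import NormalDist

def actual_false_alarm_rate(sigma_assumed, sigma_true, alpha=0.05):
    """Alarm at observed ID > z_(1-alpha) * sigma_assumed. If the true
    standard deviation is larger (e.g., non-measurement errors omitted),
    the realized false alarm rate exceeds the nominal alpha."""
    n = NormalDist()
    threshold = n.inv_cdf(1 - alpha) * sigma_assumed
    return 1 - n.cdf(threshold / sigma_true)

print(round(actual_false_alarm_rate(2.0, 2.0), 3))  # 0.05: nominal, model correct
print(round(actual_false_alarm_rate(2.0, 3.0), 3))  # about 0.136: sigma understated
```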
It must be recognized that some sources of variability may never be properly measured or modeled or are difficult to quantify.*
In addition, the complex measurement systems and processes have an extremely complex error structure which is difficult to model.
Thus, estimators of variability of ID will, to some extent, always be limited.
Because of these and other considerations, NRC regulations require measured material balances. Nevertheless, non-measurement contributors should be considered in performing ID evaluations, since fully measured material balances are not usually achieved.
The concepts and definitions of error as well as error propagation procedures which are in wide use throughout the nuclear industry and the NRC follow the definitions, techniques, and models offered by Jaech [7].
Jaech's approach provides an operational methodology for handling data for measurement systems operating under variable conditions.
However, Jaech treats bias only as a random variable, which has the effect of averaging its impact on ID variability.
In the evaluation of ID there are several approaches to treating bias.
The appropriate characterization of ID variability depends upon the particular approach adopted.
In this perspective, the Jaech characterization of ID precludes some approaches to treating bias.
In particular, if an approach were adopted to treat bias in ID evaluation as a fixed (non random) quantity, then Jaech's methodology would have to be appropriately extended.
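The practical difference between the two treatments can be sketched numerically. All parameter values below are invented for illustration; this is not an account of Jaech's or NRC's actual computations.

```python
import random
import statistics

random.seed(2)

N_ITEMS = 100   # measured items per material balance (illustrative)
SIGMA = 0.5     # per-item random measurement error SD (illustrative)
BIAS = 0.1      # common per-item calibration bias (illustrative)

def material_balance(bias_is_fixed):
    """Sum of N_ITEMS measurement errors sharing one bias term.
    Fixed: the same shift every period.  Random: redrawn each period."""
    b = BIAS if bias_is_fixed else random.gauss(0.0, BIAS)
    return sum(random.gauss(b, SIGMA) for _ in range(N_ITEMS))

fixed = [material_balance(True) for _ in range(2000)]
randm = [material_balance(False) for _ in range(2000)]

# Fixed treatment: every balance is shifted by about N_ITEMS * BIAS = 10,
# with spread SIGMA * sqrt(N_ITEMS) = 5.  Random treatment: the mean
# shift averages out to zero, but the spread is much larger, since the
# bias term contributes N_ITEMS**2 * BIAS**2 to the variance.
print(round(statistics.mean(fixed), 1), round(statistics.stdev(fixed), 1))
print(round(statistics.mean(randm), 1), round(statistics.stdev(randm), 1))
```

The sketch shows why the choice matters: treating a genuinely fixed bias as random obscures a systematic shift in every material balance while inflating the quoted variability.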
In addition, Jaech's formulation is not always consistent with prevailing statistical usage; this could be a source of confusion and could lead to inaccurate evaluation of ID variability.
In particular, Jaech defines "random error," "short-term systematic error," and "long-term systematic error" as follows:

"An error that affects only a single member of a given data set is called a random error. If the error affects some, but not all, members of the data set, it is called a short-term systematic error. If it affects all members of the data set it is called a long-term systematic error or a bias." [7, p. 81]
These definitions (and treatment) of error are not always consistent with definitions in other disciplines [12].
Although Jaech does treat his errors as random variables, these definitions make no clear reference to underlying error populations or to any concept of randomness.
In addition, the terms "bias" and "systematic error" are generally considered synonymous in the statistical literature and industrial practice but not in Jaech's book.
Another problem area concerns weaknesses in the licensee ID variability models with respect to model structure and parameter estimation.
Several sources of error may be identified in the process of modeling ID variability.
When an error source is specified, the statistical characteristics of this error should also be specified as clearly as possible and their statistical validity considered.

*In particular, it is generally difficult to identify all sources of non-measurement error, e.g., undetected human errors. Further, even when some sources of non-measurement contributions can be identified, it may be too difficult to determine their impact on the variability of ID. These difficulties stem from difficulties in quantifying the magnitudes of the errors and their probability distributions.
Current practice, however, is to make certain distributional assumptions relative to the error, without adequate verification.
In particular, the normal distribution is assumed for virtually every measurement error associated with the observed ID.
While regulatory literature (AEC Regulatory Guide 5.22 [17]) provides a technique for testing the assumption of normality, this technique is only beginning to be employed under the 10 CFR 70.57 Measurement Control Review Criteria.
Other characteristics of error --- including mean, standard deviation, and skewness --- are frequently neither specified nor verified by an appropriate statistical test.
Sound statistical procedures should be used for error characterization and estimation.
Otherwise, results are not rigorously supportable and could be interpreted inappropriately.
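A minimal check of the normality assumption can be made from sample moments. The sketch below is a generic moment-based screen on simulated data; it is not the Regulatory Guide 5.22 procedure, and the cutoff values a reviewer would apply are not specified here.

```python
import random

random.seed(3)

def skew_and_excess_kurtosis(xs):
    """Sample skewness and excess kurtosis; both are near zero for
    normally distributed data, so large values flag non-normality."""
    n = len(xs)
    mean = sum(xs) / n
    devs = [x - mean for x in xs]
    m2 = sum(d * d for d in devs) / n
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

# Simulated measurement errors: one normal set, one heavily skewed set.
normal_errors = [random.gauss(0.0, 1.0) for _ in range(5000)]
skewed_errors = [random.expovariate(1.0) for _ in range(5000)]

print(skew_and_excess_kurtosis(normal_errors))  # both near zero
print(skew_and_excess_kurtosis(skewed_errors))  # both clearly nonzero
```

A formal test would compare such statistics against their sampling distributions; the point here is only that the distributional assumption is checkable rather than something to be taken on faith.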
2. Deficiencies in Statistical Practices

Within thirty days subsequent to each physical inventory, the licensee transmits to its IE regional office limited data consisting of a material balance summary sheet containing the ID estimate, the calculated LEID, the throughput for the period (defined as the maximum of additions to or removals from the process), information on bias and other adjustments, and other data.
These data are summarized in the "White Book."
The "White Book," published by the Office of Inspection and Enforcement (IE) in 1979 [22], reported 803 inventories for which LEID data were supplied.
Of these inventories, 375 instances were reported where ID exceeded LEID, whereas the nominal or intended definition of LEID would have predicted about 40 such instances.
The data in the "White Book" have limited utility for ID evaluation.
They contain adjustments from prior periods which distort the current results.*
An exercise comparing ID values with corresponding LEID values using current inventory data for 231 inventories yielded 77 cases of ID exceeding LEID, about 33%.
Even with better data, the rate at which ID exceeds LEID is still considerably higher than the 5% expected.
This experience is not unexpected inasmuch as LEID calculations fail to incorporate such contributors to ID variability as unmeasured side streams, unmeasured inventory and human errors.
In addition, licensees often have errors in their LEID methodology.
The Comprehensive Evaluation Program [2, 23] found numerous licensee errors that were not routinely uncovered by either the license review or as a result of IE inspection activities.
While NRC staff has worked to correct these errors, some errors might still exist.
Further, the components in the ID equation were not always treated on a uniform basis.
(For example, a comparison between two fuel fabrication facilities disclosed that liquid waste discards were accounted for differently.
In one facility, the discards after measurement were maintained on site in a lagoon and periodically measured to adjust the ID of the facility.
In the other facility, the discards after measurement were removed from the facility with no further adjustments possible [27].) Also, sound statistical procedures have not been developed to correct ID for bias and shipper-receiver differences.
These factors limit the use of ID data for analysis.

*The prior period contribution to ID can, in most cases, be identified; however, it cannot always be assigned to a specific previous inventory period. As an example, scrap may be generated over several inventory periods and recovered several years later. Accordingly, the assignment of this contribution to specific inventory periods is not possible. This matter is discussed further in Section C, "ID Definition and Modeling."
NRC does not independently verify all licensee LEID calculations.
NRC review of methodology required to validate LEIDs as reported by licensees is incomplete.
In addition, NRC does not require licensees to routinely provide sufficiently detailed descriptions of the methodology, statistical justification, and analyses necessary for validation of licensee ID variability models.
Submission of detailed operating procedures, details on measurement models, methods and data, and sufficiently detailed information on the measurement control program employed by each licensee under 10 CFR 70.57 are not routinely required by NRC [8, 27].
This detailed information would be necessary to both validate and verify the ID and LEID results.
3. Possible Changes

The deficiencies in the statistical foundations and practices relating to determination of ID variability could be eliminated or mitigated by the following changes:
a. Adopt the standard deviation as the measure of ID variability. Eliminate the use of the term "limit of error."

b. Modify the regulations and guides to define and distinguish among the true standard deviation, its estimator, and its estimate (the numerical value realized for the estimator).

c. Improve guidelines to better model ID variability and to identify and characterize, to the extent possible, on a facility-specific basis all contributors to the variability of inventory difference, including non-measurement errors. (It is recognized that this is a complex and difficult undertaking.)

d. In the development of future guidance for licensees, eliminate concepts and definitions of error that are overly restrictive and confusing. In particular, investigate various approaches to the treatment of bias and its impact on ID variability and clarify the notions of random and systematic error.

e. Assure that the assumptions made in licensee ID variability models are appropriately tested and that sensitivity analyses are performed to assess the consequences of wrong assumptions in the modeling of the variability of random errors.

f. Provide guidelines to control the magnitude and effect of modeling errors, including those stemming from non-measurement contributors.

g. Expand licensee reporting requirements and NRC review procedures to effectively verify each licensee's estimated standard deviation of ID.
4. Current Staff Activities

There are several continuing activities concerned with treating inventory difference and its variability.
One activity is the application of the MCLAMS (Measurement Control, LEMUF, and MUF Simulation) model to nuclear fuel cycle facilities.* Another activity is the development of an advanced material accounting simulation model.
As stated earlier (Section III. A.3), another effort is the development of a material control and accountability upgrade rule.
The upgrade rule effort generally deals with issues which are independent of the statistical deficiencies identified in this section.
However, the upgrade rule will provide a framework within which statistical advances can be incorporated.
Other activities are an evaluation of the sensitivity of the standard deviation of various loss estimators to specific losses and distributional assumptions, and the development of investigative criteria for ID and for its standard deviation.
C. ID Definition and Modeling

In general, NRC regulations and guides do not explicitly consider true ID and fail to distinguish between the true ID and the observed ID.
The true ID is the correct quantity and the observed ID is calculated from measured components, which are subject to uncertainty.
(The definition of MUF (ID)** in 10 CFR 70.51(a)(6) seems to deal only with measured quantities.) A clear distinction between the true ID, although unmeasurable, and the corresponding observed ID is essential for correct and complete understanding and modeling, and for rigorous statistical treatment of ID.
(See Appendix 3.)
Observed ID can be simply modeled as a sum of true ID and an error term, where the latter reflects contributions from various sources such as measurement error, sampling error, and other inventory error contributors.
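This simple model can be written out directly. In the sketch below the true ID and the error standard deviation are invented for illustration; the point is that the observed ID scatters around the unobservable parameter.

```python
import random
import statistics

random.seed(4)

TRUE_ID = 3.0    # the parameter: actual unaccounted-for material (illustrative)
ERROR_SD = 5.0   # combined measurement and sampling error SD (illustrative)

# Observed ID = true ID + error; each inventory period yields one
# realization of the estimator.
observed = [TRUE_ID + random.gauss(0.0, ERROR_SD) for _ in range(10_000)]

mean_observed = statistics.mean(observed)
fraction_negative = sum(1 for x in observed if x < 0.0) / len(observed)

# The estimator is centered on the true ID, yet any single estimate can
# miss it badly -- here a true loss of 3.0 yields a negative observed ID
# (an apparent gain) in roughly a quarter of the periods.
print(round(mean_observed, 1), round(fraction_negative, 2))
```

The sketch makes concrete why a single observed ID, positive or negative, cannot by itself establish or rule out a loss.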
Guidance is lacking in distinguishing among true ID, an ID estimator (formula for computing ID) and an ID estimate (numerical value obtained by substituting observed data into an estimator).
The guides do not explicitly state error terms, do not label random and fixed components, and make no reference to true ID.
(The context in which an error is to be dealt with may determine whether the error is to be regarded as random or as fixed.
Statisticians deal with this problem under the subject of components of variance or mixed model analysis of variance.) As defined, ID may reflect impacts of other than current period operations.
In
- MCLAMS is in the process of being phased out and replaced by an improved material accounting model called AMASS (Advanced Material Accounting System Simulation).
- The term ID, although in wide use by NRC, has never replaced the term MUF officially.
Lack of uniformity in terminology is a potential source of misunderstanding.
15
particular, the regulations do not require the adjustment of ID to eliminate prior period contributicns caused by shipper-receiver dif-ferences, waste and scrap operations, and possibly other operational factors.
(It is, however, current practice to have the licensee list these contributions.) Possible changes are:
a. Replace the term MUF by ID in all regulations and official documents.

b. Modify regulations and guides to be consistent with the ANSI Standard N15.16 definitions of parameter, estimator, and estimate for true ID, ID estimator and ID estimate, respectively.

c. Improve ID modeling by more explicit consideration of true ID and identification of random and fixed components, and modify regulations and guides accordingly.

d. Modify regulations and guides to incorporate statistically acceptable procedures to correct for bias, shipper-receiver differences, and waste and scrap operations.
Current staff activities related to ID definition and modeling include the development of procedures to correct for bias and shipper-receiver differences, improve ID modeling, and evaluate the effect of departures from the usual assumptions of the ID model.
16
APPENDIX 1

STAFF REQUIREMENTS MEMORANDUM DATED APRIL 4, 1980

MEMORANDUM FOR: William J. Dircks, Acting EDO

FROM: Samuel J. Chilk, Secretary

SUBJECT: STAFF REQUIREMENTS - BRIEFING ON TWO APPROACHES TO THE TREATMENT OF INVENTORY DIFFERENCES IN NUCLEAR MATERIAL ACCOUNTING, 10:00 A.M., MONDAY, MARCH 31, 1980, ROOM 550 EAST-WEST TOWERS, BETHESDA, MARYLAND (OPEN TO PUBLIC ATTENDANCE)
The Commission* was briefed on the current treatment of inventory differences, and on a possible alternative treatment method.
The Commission requested:
1. that MPA and NMSS prepare a paper on possible ways of changing the current treatment of inventory differences to make it statistically valid; (MPA/NMSS) (SECY Suspense: 4/30/80)

2. that the Executive Director for Operations consider directing some FY80 resources to game theory study. (EDO)
The Commission also requested:
3. that the Secretary evaluate Conference Rooms in NRC occupied buildings in addition to H Street and East-West Towers for their suitability for Commission meetings. (SECY)

cc: Chairman Ahearne
Commissioner Gilinsky
Commissioner Kennedy
Commissioner Hendrie
Commissioner Bradford
Commission Staff Offices
*Commissioner Gilinsky was not in attendance.
APPENDIX 2

A BRIEF CHRONOLOGY

From 1947 to 1954, ownership of Special Nuclear Material (SNM) belonged either to the Government or to Government-controlled research facilities at various universities.
With the advent of the Atomic Energy Act of 1954, specific licensing requirements provided for "possession, transfer, and use of SNM in the private sector." [20, p. 7] This act charged the Atomic Energy Commission (AEC) with the responsibility to implement a program for assuring against the loss or diversion of Special Nuclear Material (SNM) to any unauthorized use.
During the period 1954 to 1964, SNM was Government-owned and distributed to licensees by the AEC, which generally depended upon a voluntary program of material protection and accountability by the licensees.
It was then believed that the high intrinsic financial value of SNM as well as the criminal penalties provided by the Atomic Energy Act for theft of SNM were sufficient incentives for good accountability [25, p. 2, Executive Summary].
In 1964 Public Law 88-489 permitted private ownership of SNM.
In 1966 the AEC proposed additional regulations for accountability under 10 CFR 70.
Amendments were issued in 1967 to codify material control and accounting regulations.
Certain records, fundamental controls, and periodic inventories (at least annually) were required for licensees authorized to possess in excess of 5,000 grams of SNM in unsealed form.
In 1967, the Lumb Panel report [9] was written and recommended internal management control and establishment of criteria for evaluation of shipper-receiver differences, inventory differences, quantities of SNM discarded or lost, and possession limits on certain types of SNM.
In December 1967 the AEC published "Fundamental Material Safeguards Procedures," which included accountability requirements.
From 1968 through 1974 there was a stream of AEC changes to license conditions concerning material accounting.
10 CFR 70.51 ("Material Balance, Inventory, and Record Requirements") was issued in effective form on November 6, 1973.
10 CFR 70.58 ("Fundamental nuclear material controls") was added on October 24, 1974.
With the formation of the NRC in 1975 there has been an increasing concern to upgrade existing safeguard measures for SNM, specifically in the areas of physical security and establishment of licensees' measurement control programs.
On August 11, 1975, Section 70.57 ("Measurement control program for special nuclear material controls and accounting") was added.
During the years 1975 and 1976, the primary focus of the NRC was on improvements in physical security systems.
However, during that same period, the NRC established new regulations which required licensees to "establish quality assurance programs based on national standards... review and revise their measurement systems to reflect any changes in process, and... establish a statistical control program for measurements." [20, p. 9] This essential structure has continued to this date.
APPENDIX 3

PARAMETERS AND STATISTICS

Fundamental to any statistical modeling and estimation are the concepts of parameter and statistic.
The definitions of these terms, found in many elementary texts, are written in the context of population and sample.
In measurement systems a population is the set of all possible measurements of an item.
Theoretically, the population consists of an infinite number of measurements.
A sample consists of one or more of these measurements.
The quotes below are taken from Mendenhall and Ott [10, p. 39].
"Numerical descriptive measures of a population are called parameters."

"Numerical descriptive measures computed from a sample are called statistics."
In material accounting the true ID is a parameter.
It is the quantity that would be obtained if the amounts of material in the beginning inventory, ending inventory, shipments, and receipts were known exactly and without any error whatsoever.
If this parameter were known exactly, then a positive true ID, no matter how minute, would be indicative of loss or diversion.
In reality, this parameter is not known and has to be estimated by a statistic.
Statisticians often make a distinction between the concepts of estimator, which is the algebraic formula by which a statistic is computed, and estimate, which is the numerical value obtained by applying the estimator to the data.
In a simple model for inventory difference, the estimator of ID (a statistic) is written as the sum of true ID (a parameter) and an error.
This is analogous to terminology used in the field of communication; the signal corresponds to the parameter, noise corresponds to the error, and the perceived signal, called signal-plus-noise, corresponds to the statistic.
A measure of variability of the estimator of ID is given by the standard deviation of ID.
For this measure the concepts of parameter and statistic are also applicable.
The true standard deviation of ID is a parameter, a quantity which is rarely, if ever, known.
It is estimated by a statistic which has a variability of its own.
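The point can be made concrete with a small simulation (all values illustrative): even when measurement errors come from a known normal population, an estimate of their standard deviation based on a handful of replicates scatters appreciably around the true parameter.

```python
import random
import statistics

random.seed(5)

TRUE_SD = 4.0      # the parameter (illustrative)
REPLICATES = 10    # measurements available per estimate (illustrative)

def sd_estimate():
    """One sample standard deviation computed from a few replicates."""
    sample = [random.gauss(0.0, TRUE_SD) for _ in range(REPLICATES)]
    return statistics.stdev(sample)

estimates = [sd_estimate() for _ in range(5000)]
mean_estimate = statistics.mean(estimates)
spread = statistics.stdev(estimates)

# With only 10 replicates the estimator runs slightly low on average,
# and its own standard deviation is a sizable fraction of the true value.
print(round(mean_estimate, 2), round(spread, 2))
```

This variability of the estimated standard deviation is itself a quantity that any rigorous treatment of LEID would need to account for.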
APPENDIX 4

ADDITIONAL STAFF VIEWS

This appendix contains additional staff views which were not reflected in the text of the report.
These include:

1. Comments from S. Moglewer, M. Messinger, D. Lurie, and D. Rubinstein.

2. Suggestions from the Safeguards Standards Branch, SD.

3. Outline of an overview of statistical problems in material accounting and safeguards, by David Rubinstein, Applied Statistics Branch, MPA.
Comments from S. Moglewer, M. Messinger, D. Lurie, and D. Rubinstein on Possible Changes in the Statistical Treatment of Inventory Differences in Nuclear Material Accounting.
We, the members of the original writing group, offer the following comments which we believe present a better perspective on some issues.
The issues are identified by direct quotes from the paper, including the page number where they are given.
Our comments are given below each quote.
P.1 "probability of alarming and initiating an investigation"

The paper uses this phraseology throughout.
It is inappropriate since the phrase "initiating an investigation" incorporates assumptions about the decisions based upon the statistical data and is properly outside the purview of this paper as stated in the Introduction and Summary on P.1.
Moreover, the phrase "initiating an investigation," unless more specifically defined, is vague and possibly misleading as to the depth and thoroughness of investigation.
P.1 "The possible changes identified in this report are not recommendations."
While direction to staff in the memorandum from S. Chilk [5] is not explicit on recommendations, we feel that many of the issues have been extensively discussed and recommendations could be effectively formulated at this time.
P.2 "A. ID Evaluation

1. The procedure for setting the current ID action limits reflects some non-statistical considerations and is not based on the framework of the statistical test of hypotheses."
This statement may be misleading.
The procedure for setting the current ID action limits is non-statistical.
It sets ID action limits based upon facility throughput, independently of any statistical or probabilistic considerations.
P.2 "3. The current hypothesis testing approach does not consider the trade-off between the false alarm rate and the probability of alarming and initiating an investigation when a specified loss or diversion has occurred."
This phraseology occurs in numerous parts of the paper.
Trade-offs are generally expressed between two positive alternatives or two negative alternatives.
Better phraseology is "the trade-off between the false alarm rate and the probability of an undetected loss or diversion of a specified amount."
P.2 A major statistical deficiency that has not been incorporated into the items under ID Evaluation is the following:

4. There have been incorrect interpretations of the statistical test of hypothesis as well as inconsistent and varying interpretations of inventory difference (ID) data. (p.8 and Enclosure)
The Possible Changes would be:

4. Provide guidance for interpretation of inventory difference (ID) data in a consistent manner and in accordance with statistical principles where appropriate. For example, do not interpret non-statistical tests (such as those based upon current ID action limits) as tests of hypotheses.
P.3 "4. Concepts and definitions of error which are in wide use in the nuclear industry may be overly restrictive and are not always consistent with definitions in other disciplines; this could be a source of confusion and could lead to inaccurate evaluation of ID variability."
The word "always" may be misleading.
The concepts and definitions of error which are in wide use in the nuclear industry are inconsistent with other disciplines, and they are imprecise, and therefore less restrictive.
P.3 "4. In the development of future guidance for licensees, eliminate concepts and definitions of error that are overly restrictive and confusing. In particular, investigate various approaches to the treatment of bias and its impact on ID variability and clarify the notions of random and systematic error. (p.14)"
The above Possible Change does not correct for disparity with standard statistical practice.
We suggest the first sentence be replaced by:
"4. In the development of future guidance for licensees, eliminate concepts and definitions of error that are confusing or that do not conform to standard statistical practice. (p.14)"
Also, substitute this paragraph for item d. on page 14.
P.3 Add the following Possible Changes to Item 2 as well as in the body of the paper: Investigate methods for estimating the variability of LEID estimators, determining the effective degrees of freedom, and assessing LEID bias.
P.5 "Over the years, inventory differences which have exceeded an alarm threshold have been interpreted, not always consistently, as indicators of possible loss or diversion, indicators of "out of control" facility operations, or simply indicators for further investigation."
Add the following sentence: On the other hand, when inventory differences have been below the threshold, the possibility of diversion has generally been discounted.
See also comment for quote from p.8.
P. 7 "ID evaluation involves the comparison of measured ID with several action limits.
For decision-making, the statistical test of hypotheses is a traditional approach."
To clarify the issue, we suggest the following sentences be added:
"However, NRC decision-making is not based upon the traditional approach.
This is because of the non-statistical nature of the alarm threshold as illustrated below."
P.7 "The procedure for setting the current ID action limits as specified in a 1974 policy letter [13], is not based on the framework of the statistical test of hypotheses and reflects some non-statistical considerations."
The procedure is non-statistical!
See comment for p.2 of paper.
P.8 "The NRC has recognized this general problem in various NRC reports releasing ID data to the public.
These reports make the following statements:
"While an inventory difference larger than LEMUF may signal an abnormal situation requiring investigation, a small inventory difference falling within its associated LEMUF is not automatic proof that no loss or theft of material has occurred.
Therefore, the NRC relies on evidence provided not only by the material accounting system but also by the internal control system, the physical security system, NRC inspections and evaluations, and NRC and licensee investigations." [20, p. 6]
"Although an inventory difference larger than its overall measurement uncertainty (limit of error) may signal an abnormal situation requiring investigation, the fact that a small inventory difference falls within its associated limit of error--even an ID of zero--provides no automatic or conclusive proof that loss or theft of material has not occurred.
Therefore, the NRC relies on information provided not only by the material accounting system but also by the internal control system, the physical security system, NRC inspections and evaluations, and NRC and licensee investigations." [24, Foreword]
The misinterpretation of 'acceptance' of the null hypothesis is a common pitfall.
NRC has not developed adequate guidelines for interpretation of small IDs.
An illustration of the possible misinterpretation of small IDs is given by the following quote:
'The reinventory resulted in a gain of - 4.339 kgs which when added to the July inventory difference resulted in an inventory difference of 7.568 which was within regulatory limits for measurement uncertainty...' [24, Vol. 1, No. 3, July 1979]
This quote is vague in its implication and may be construed to imply a conclusion of no diversion.
See the attached enclosure, which provides a history of diverse interpretations given to inventory difference information [Enclosure 1].
P.12 "The concepts and definitions of error.... In particular, Jaech defines 'random error', 'short-term systematic error' and 'long-term systematic error' as follows:"
The above paragraph does not emphasize that Jaech's approach is not standard statistical practice, particularly for fixed effects.
We recommend that standard statistical practice be used for any NRC approach for treatment of error modeling.
This means treating "bias" primarily as a fixed quantity and eliminating potentially confusing terms such as "systematic error." The established theory of components of variance should be considered as an alternative to Jaech's approach.
P.12 "In addition, Jaech's formulation is not always consistent with prevailing statistical usage..."
A better statement is "Jaech's formulation is not consistent with prevailing statistical usage."
P.12 "These definitions (and treatment) of error are not always consistent with definitions in other disciplines [12]...."
It is suggested that the entire paragraph beginning with this sentence be replaced by the following paragraph:
Jaech's definitions (and treatment) of error are not consistent with definitions in other disciplines.
The definition of short-term systematic error is subject to different interpretations.
The terms "bias" and "systematic error" are generally considered synonymous in the statistical literature and industrial practice but not in Jaech's book.
The following statement taken from Jaech is indicative of the confusion between bias and long-term systematic error.
"In this definition, no distinction is made between a long-term systematic error and a bias because the quantities differ with respect to how they may be treated statistically but not with respect to their basic meanings.
(In a general sense we can also speak of a short-term bias that affects some, but not all, the members of the data set.
This has the same relationship to a short-term systematic error as bias has to a long-term systematic error.)" [7, p. 81]
P.13 "While NRC staff has worked to correct these errors, some errors might still exist."
The last part of the quote might be an understatement.
ENCLOSURE 1

NRC STATEMENTS CONCERNING DIVERSION BASED ON INVENTORY DIFFERENCE INFORMATION

An abbreviated chronology of NRC statements concerning diversion is presented.
This brief history indicates diverse interpretations given to inventory difference information and the evolution of thinking over time.
The diverse conclusions seem to relate to differing interpretations of "acceptance" of the null hypothesis, i.e., instances where the observed ID was less than the alarm threshold.
However, some of the complexities associated with ID evaluation appear to be gaining recognition through time.
Effectiveness of safeguards against loss or diversion of Special Nuclear Material (SNM) has been a concern of the Atomic Energy Commission (AEC)/the Nuclear Regulatory Commission (NRC).
The report on Strategic Special Nuclear Material Inventory Differences (NUREG-0350) was the first NRC report that attempted to make a general statement about theft and diversion.
The infor-mation presented in NUREG-0350 covered:
"The operation of major licensed nuclear fuel manufacturers and research laboratories processing significant quantities of SNM between January 1, 1968 and September 30, 1976."
[20, p. 1]
The report stated in the opening paragraph on page 2:
"The Nuclear Regulatory Commission has no evidence that any significant amount of strategic SNM has ever been stolen or diverted." [20, p. 2]
This statement, in conjunction with a report by OGS/OIA, "Inquiry into Testimony of the EDO" presented to the Commission on May 11, 1978, led to the Commission request, by the June 2, 1978 memorandum, that "The staff should develop,...., a general statement concerning conclusions that can be made about theft and diversion of significant amounts of Strategic SNM." [A]
[A]
The decision for this request was based on four areas of concern.
"First the statement NRC has no evidence that any significant amount of strategic SNM has ever been stolen or diverted, on page 2 of the first inventory difference report, NUREG-0350, must be understood to apply only to the post-1968 period to which the report applies.
Second, with regard to the NUMEC matter itself, an appropriate characterization is that based on information available to the Commission at the present time, there is no conclusive evidence that a diversion of a significant amount of strategic SNM either did or did not take place.
Third, the Commission believes that unqualified 'no evidence' statements should be avoided in characterizing inventory difference matters, since even a zero inventory difference does not conclusively demonstrate that material has not been diverted.
Qualified 'no evidence' statements should not imply a higher degree of confidence than the situation warrants.... Fourth, in dealing with the pre-1968 safeguards data, staff statements should note that such data predate any regulatory staff activity and derive from a period in which safeguards measures were far less stringent than at present."
[A]
As a consequence of the Commission request the Office of Nuclear Material Safety and Safeguards (NMSS) issued, in the form of report SECY 78-632, on December 6, 1978, "NRC Statements on Conclusions Concerning Strategic Special Nuclear Material (SSNM) Theft and Diversion."
[B] The report attempted to characterize three cases of safeguards occurrences, and to make a general statement that the NRC could release to the public about each occurrence.
The three cases as given on page 2 of the report are:
"No information has been identified that would establish a basis to indicate that any significant quantity of SSNM has been stolen or diverted....
Although no information has been identified that would establish that any significant quantity of SSNM has been stolen or diverted, conditions were such that these acts cannot be ruled out as a possibility....
Information has been identified which indicates that a theft or diversion of SSNM may have occurred."
[B]
The suggested public statement by case are as follows:
".. It... is the staff judgment that the safeguards system in place at this facility (ies) has beea effective in preventing the theft or diversion of a significant quantity of strategic special nuclear material.
..Although the theft or diversion of SSNM has not been established, conditions were such that these acts cannot be ruled out as a possible cause of the Inventory Difference.
...The preliminary findings of the NRC evaluation have been provided to the FBI for review of possible criminal violations of the Atomic Energy Act.
Upon completion of the FBI's review, a report of the NRC review and evaluation will be available to the public."
[B]
In light of this report, the Commission unanimously recommended that the staff reconsider and avoid public statements based on Case 1, which, in the Commission's opinion, still allowed for a "no loss" statement.
The Commission also directed the staff:
"...to develop a general statement describing our present position on the question of whether or not an) + heft or diversion of significant amounts of SSNM has cccurrtd I.,
t;,e past, and define the time frame to which this statement applies."
[C]
Aside from these recommendations, the Commission approved the "conceptual approach contained in the paper."
The staff review of these requests resulted in report SECY-79-345, submitted May 21, 1979.
In this report the staff urged the Commission to accept statement 1 because:
"...the point of this statement is that NRC can reach an overall judgment of 'no (real) loss,' notwithstanding the presence of an Inventory Difference (be it an accounting loss or gain)."
[D]
In response to the general position on "whether or not any theft or diversion has occurred in the past," the staff proposed the following summarized statement.
"In Summary, based upon (1) all infor. nation supplied to NRC concerning safeguards and accounting at licensed facilities for activities prior to 1968, and (2) all information presently available to the NRC for safe-guards and accountability since the AEC/ Regulatory (now NRC) assumed that responsibility in 1975, the NRC has not identified any fact establishing that any significant quantity of SSNM has ever been stolen or diverted from a licensed facility.
However, the absence of comprehensive physical security systems to protect against theft or diversion prior to 1974, together with the presence of some large Inventory Differences reported prior to that time, result in the conclusion that a covert theft or diversion attempt could have been successful... it is the NRC's judgment that the overall safeguards programs which have been established at licensed facilities since 1974 have been effective in preventing the theft or diversion of any significant quantity of SSNM."
[D]
The Commission (with three Commissioners concurring) then approved the general statement concerning past occurrences of any theft or diversion [E].
After the NFS-Erwin August 1978 inventory, the staff received the following request:
"The Commission requests the Staff to review the subject paper to see if the conclusions concerning theft or diversion of significant quantities of SSNM presented in the attached modification of the statement need to be altered in light of the latest Erwin MUF."
[E]
The staff reviewed the general statement and "concluded that it should be modified."
[F]
The latest proposed general statement of conclusions concerning past events of theft and diversion of significant quantities of Strategic Special Nuclear Material is pending Commission approval in the form of report SECY-80-104.
In this report the staff concludes:
"The investigation into the NFS-Erwin matter has shown that there is insufficient basis for the judgmental statement that the overall safe-guards programs which have been established at licensed facilities since 1974 have probably been effective in protecting against the theft or diversion of any significant quantity of SSNM.
With this entry removed, the general statement would appear to be more in line with our experience at NFS, i.e., 'although the NRC is not aware of any facts establishing the occurrence of such acts since 1974, the continued presence of some large Inventory Differences indicates that the possibility of theft or diversion cannot be conclusively ruled out."
[F]
ENCLOSURE REFERENCES

A. Chilk, Samuel J., NRC, memorandum to Lee V. Gossick, "Commission Review of OGC/OIA Report 'Inquiry Into Testimony of the EDO'," June 2, 1978.

B. Dircks, William J., NRC, report to the Commissioners, Commissioner Action Item, SECY-78-632, "NRC Statements on Conclusions Concerning Strategic Special Nuclear Material (SSNM) Theft and Diversion," December 6, 1978.

C. Chilk, Samuel J., NRC, memorandum to Lee V. Gossick, "SECY-78-632 - NRC Statement on Conclusions Concerning Strategic Special Nuclear Material (SSNM) Theft and Diversion (Commissioner Action Item)," February 9, 1979.

D. Dircks, William J., NRC, report to the Commissioners, Commissioner Action Item, SECY-79-345, "Conclusions Concerning Past Events of Theft or Diversion of Significant Quantities of Strategic Special Nuclear Material (SSNM)," May 21, 1979.

E. Chilk, Samuel J., NRC, memorandum to Lee V. Gossick, "SECY-79-345 - 'Conclusions Concerning Past Events of Theft or Diversion of Significant Quantities of Strategic Special Nuclear Material (SSNM) (Commissioner Action Item)'," October 19, 1979.

F. Dircks, William J., NRC, report to the Commissioners, Commissioner Action Item, SECY-80-104, "Conclusions Concerning Past Events of Theft or Diversion of Significant Quantities of Strategic Special Nuclear Material (SSNM)," February 20, 1980.
SUGGESTIONS FROM SAFEGUARDS STANDARDS BRANCH

Terminology

All the decision problems concerning nuclear material accountability are based on statistical inference.
The reason is that nuclear material accountability is based on measurements.
Correct terminology makes possible good communications about accountability decisions.
Correct terminology is crucial to the licensee statistician who has to meet the 10 CFR requirements.
The same is true for the I&E inspector.
Thus, all accountability requirements must be stated in correct terminology.
The following terminology is strongly suggested for use in all requirements and decision statements concerning nuclear material accountability.
1. Parameter: A constant which is associated with or is used to characterize a distribution or density function. Note: The true value of the constant is almost always unknown. Examples of parameters in nuclear material accounting are the true ID or the true variance of ID.
2. Estimator: A function of a sample (X1, X2, ..., Xn) used to estimate a population parameter. Note: An estimator can be a function which, when evaluated, results in a single value or in an interval or region of values. In the former case the estimator is called a point estimator; in the latter case it is called an interval estimator. Examples of estimators in nuclear material accounting are an estimator of the true ID or an estimator of the true variance of ID.
3. Estimate: A particular value or values realized by applying an estimator to a particular realization of a sample, i.e., to a particular set of sample values (x1, x2, ..., xn). Note: Examples of estimates in nuclear material accounting are the numerical values obtained from using the estimator of ID or the estimator of the variance of ID.
4. Bias: (a) The deviation of the expected value of a random variable from a corresponding correct value. (b) A fixed error which remains constant over replicated measurements. Note: A measurement, as a random variable, is called unbiased if it has zero bias, i.e., if the expected value of the measurement is equal to the correct value of the property being measured. The term "systematic error" is not recommended.
Models

It is fundamental that development of models precede the development of estimators or the development of test statistics (hypothesis testing).
The models that need to be developed for nuclear material accountability are, first, models for the fundamental measurements used in accountability, e.g., determining the amount (grams) of SNM in UO2; and second, a model for the observed ID, which must use the models for the fundamental measurements.
Valid procedures should be derived for computing the expected value of the estimator ID and the variance of the estimator ID after the appropriate models are developed.
These models are used to obtain numerical estimates of ID and Var (ID) after their component parts are estimated.
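As a sketch of such a model, the following assumes the common material-balance form ID = beginning inventory + additions - ending inventory - removals, and, purely as an illustrative assumption, independent measurement errors so that component variances add; the quantities are hypothetical, and correlated errors would require covariance terms:

```python
import math

# Hypothetical material-balance sketch of the ID model described above.
# ID = BI + additions - EI - removals, with measurement errors assumed
# independent so that the component variances simply add.

def estimate_id(bi, additions, ei, removals):
    """Point estimate of ID from measured components (grams)."""
    return bi + sum(additions) - ei - sum(removals)

def var_id(var_bi, var_additions, var_ei, var_removals):
    """Variance of the ID estimator under the independence assumption."""
    return var_bi + sum(var_additions) + var_ei + sum(var_removals)

id_hat = estimate_id(10_000.0, [2_500.0], 9_800.0, [2_600.0])   # 100.0 g
s = math.sqrt(var_id(50.0**2, [20.0**2], 50.0**2, [20.0**2]))   # ~76.2 g
print(id_hat, round(s, 1))
```

An observed ID of 100 g against a standard deviation of roughly 76 g shows why the variance model is indispensable before any judgment about the ID can be made.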
Hypothesis Testing

Hypothesis testing is a useful tool for making decisions in spite of inevitable uncertainties in measurements and of not knowing the true value of the quantity ID. Hypothesis tests can be devised to detect a loss of a goal quantity with a high probability and to minimize the false alarm probability.
Recall that there are two types of errors possible in the decision process.
Claiming that a loss occurred when no loss has occurred (the "null hypothesis") is called a Type I error; the probability of a Type I error is a and can be called the false alarm rate. Not detecting the loss when a loss has occurred (the "alternative hypothesis") is called a Type II error; the probability of a Type II error is b.
The " power of the test" is the probability of detecting the loss and is equal to 1-b.
The current test of "is ID greater than LEID?"
fixes the theoretical false alarm rate at approximately 5% without regard to the power of the test.
The NRC's concern should be for the power of the test.
As an example, let the goal quantity be 5 Fkg and the required power be 90%. Then the hypothesis test should be structured so that an investigation is initiated when the estimated ID exceeds the value 5 - (1.28)s, where 5 is the goal quantity, 1.28 follows from the 90% power requirement, and s is the estimate of the standard deviation of the estimator of ID.
Consider two normal distribution curves, one centered at zero Fkg. (the null hypothesis) and the other centered at 5 Fkg. (the alternative hypothesis).
The critical value of the hypothesis test (sometimes called the "alarm threshold" or "detection threshold") is at 5 - (1.28)s, since the area under the alternative curve to the left of this value is 10% (b).
For a different value of b, 1.28 will be replaced by a different number, which can be calculated.
There are at least two advantages to using this method of selecting the critical value of the hypothesis test, instead of using 2s (LEID).
First, if a loss of 5 Fkg. has occurred, then the hypothesis test would have detected it with a probability of 90%.
For any larger loss, the probability of detecting that loss is greater than 90%.
Using this method, the NRC can set the power of detection and a goal quantity. Second, licensees are "rewarded" by having a low false alarm rate (a) due to a good or improved measurement system (as reflected in a low value for the estimate of the standard deviation of the estimator of ID).
This is because the false alarm rate is the area under the null distribution curve to the right of the critical value.
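The threshold calculation described above can be sketched as follows (the numerical values are illustrative; as in the text, the estimated standard deviation s is treated as known and ID is assumed approximately normal):

```python
from statistics import NormalDist

# Sketch of the power-based alarm threshold described above.
# Alarm when the estimated ID exceeds  goal - z*s,  where z is the
# standard normal quantile for the required power (1.28 for 90%).

def alarm_threshold(goal, s, power):
    return goal - NormalDist().inv_cdf(power) * s

def false_alarm_rate(threshold, s):
    # Area under the null (mean-zero) curve to the right of the threshold.
    return 1.0 - NormalDist(0.0, s).cdf(threshold)

goal, s = 5.0, 1.0                      # Fkg; s is an assumed value
t = alarm_threshold(goal, s, 0.90)
print(round(t, 2))                      # 3.72
print(round(false_alarm_rate(t, s), 4))
```

With a smaller s the threshold rises toward the goal quantity and the false alarm rate drops, which is the "reward" for an improved measurement system noted above.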
Nonmeasurement Errors

The topic of "nonmeasurement errors" has the potential to cause many problems for the licensees and the NRC.
Preparation is the key to avoiding these problems.
Regulatory Guides and NUREG reports should be prepared which answer the following questions before any license conditions or regulations are written concerning nonmeasurement errors.
1. What is the definition of a nonmeasurement error?
2. What is the magnitude of the effect of nonmeasurement errors?
3. How are the specific errors identified?
4. How are nonmeasurement errors "included" in the Var(ID)?
5. What is the probability distribution of the specific nonmeasurement errors?
6. Are nonmeasurement errors already "included" in the estimate of the Var(ID)?
7. What are the alternatives to eliminate nonmeasurement errors?
8. When are nonmeasurement errors eliminated and when are they "included"?
9. Why should a licensee be rewarded for poor performance?
Solution For Transcription Errors

A potential solution to transcription errors (as a part of nonmeasurement errors?) is described as follows. The solution is applicable to: tamper-indicating seal numbers, container identification numbers, sample numbers, and accountability measurement numbers.
A short subroutine (computer or calculator) is used to apply an alphabetic code to each number at the time of the first recording of that number.
The code is derived by dividing the number by 26 and noting the remainder.
Some examples follow.
If the seal number is 2626, then the remainder is 0; apply a code of A, i.e., 2626A. If the container number is 2627, then the remainder is 1; apply a code of B, i.e., 2627B. If the measurement is 26.28, then use 2628 and the remainder is 2; apply a code of C, i.e., 26.28C.
The code is used at each subsequent recording of the number.
For example, if the measurement example number is incorrectly recorded as 26.82C (not 26.28C), then the subroutine detects an error since 2682 yields a remainder of 4 and a code of E.
Clearly E is different from C.
The alphabetic code is useful since any remainders must be between 0 and 25, which correspond to the 26 letters of the alphabet.
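The mod-26 scheme described above can be sketched in a short subroutine (the function names here are illustrative, not part of the original proposal):

```python
import string

# Sketch of the mod-26 check-letter scheme described above.

def check_char(number_text):
    # Non-digit characters are ignored, so a measurement such as
    # "26.28" is treated as 2628, as in the examples above.
    digits = int("".join(ch for ch in number_text if ch.isdigit()))
    return string.ascii_uppercase[digits % 26]   # remainders 0-25 -> A-Z

def verify(recorded):
    # A recorded value carries its check letter at the end, e.g. "26.28C".
    return check_char(recorded[:-1]) == recorded[-1]

print(check_char("2626"))   # A  (remainder 0)
print(check_char("2627"))   # B  (remainder 1)
print(verify("26.28C"))     # True
print(verify("26.82C"))     # False: 2682 yields remainder 4, code E
```

The final call shows how the transposition 26.28 -> 26.82 is caught at the next recording, exactly as in the example in the text.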
OUTLINE OF AN OVERVIEW OF STATISTICAL PROBLEMS IN MATERIAL ACCOUNTING AND SAFEGUARDS

By David Rubinstein
October 22, 1980

Preface

At the direction of the Commission [Commission Meeting March 31, 1980, and Memorandum to W. J. Dircks from S. J. Chilk, April 4, 1980], MPA and NMSS are preparing a Commission Paper suggesting possible corrections to deficiencies in current statistical procedures applied to inventory differences (ID).
The Commission's concerns seemed to relate primarily to the imprecise and convoluted statistical definitions of LEID and LEID limits and their uses.
The directives seemed to call for corrections amenable to simple implementation rather than for a fundamental review of all statistical problems relating to ID or for a thorough examination of safeguards philosophy and efficacy. Herein an attempt will be made to place the narrower statistical problems into the framework of the broader safeguards problems, particularly in relation to the detection of diversions from the analysis of inventory differences.
Because of time limitations and because of the magnitude of the subject matter, this overview has to be sketchy.
The outline represents a personal point of view by a mathematical statistician, who at best is a distant observer of the safeguards scene.
Ideas of others are incorporated, even if not expressly credited.
No claim is made for comprehensiveness, careful research, or careful phrasing.
Background
The past NRC regulatory policy and implementation with respect to inventory differences have varied over time as well as from installation to installation.
There is in fact controversy over what they should be. The predominant conceptual view of ID and ID regulation is statistical; observed or measured ID is regarded as a random variable.
Despite confusing terminology and varied implementation, the common features and basic thrust of implementation are best characterized by a statistical test of the null hypothesis that the true ID is equal to zero.
Rejection of the null hypothesis is taken as an indicator that something is wrong, possibly a diversion.
Often the principle is enunciated that the statistical test is designed with a false alarm rate of approximately .05.
In practice there have been considerable deviations from this principle in particular and from rigorous implementation of statistical tests in general.
Problems with ID and Possible Corrective Actions

The NRC problems with ID range in profundity from proper terminology to basic philosophy of the role of ID in safeguards.
Some of the problems can be solved easily and others are difficult if not impossible to solve in a satisfactory fashion.
An ideal solution of these problems calls for:
a) Defining the guiding philosophy and objective - obviously the protection of public health and safety.
b) Translating these into more concrete terms; this in turn can start out in fairly general terms which are developed into some sort of specifications.
c) Technical implementation of these specifications, resulting in specific procedures.
d) A review of the technical procedures under (c) for coherence to (a) and (b).
e) Possible modification of (a), (b), and (c) as a result of (d).
Obviously, fulfillment of the outlined steps is a long-range project.
Even though the present efforts at correcting deficiencies are limited, it may be useful to see these efforts in the context of broader safeguards objectives.
The problems enumerated and discussed below are ordered in a rough hierarchy from the simple to the difficult problems (almost in the opposite order of (a), (b), and (c) above). They are taken up under the following headings:
1) Terminology.
2) Test of statistical hypothesis.
3) Correct modeling of error structure.
4) Assessment of the efficacy of statistical methods and material accounting.
5) Review of the philosophy and objectives of safeguards.
1. Terminology

The terminology pertaining to statistical concepts in material accounting is frequently at variance with established terminology used by the statistical community and the public at large.
The " nuclear" or NRC terminology is less precise than the terminology commonly used by statisticians.
Three issues are discussed with respect to terminology.
a) Should NRC as a public agency have its own idiosyncratic jargon when good established terminology exists?
b) Is terminology a "substantive" issue [quotes are NRC jargon]?
c) Will a change in terminology cause more confusion than it will alleviate?
Point (a) practically answers itself.
The public as well as persons within the nuclear field should not be burdened with having to learn idiosyncratic jargon and having to deal with the ambiguities of this jargon.
This writer cannot make categorical statements with respect to point (b) because of a lack of hands-on experience with material accounting. His general experience as a consulting statistician in NRC is:
In simple matters often the appropriate conclusion is drawn despite bad terminology; occasionally a bad conclusion is drawn or the essence of a problem is misunderstood because of bad terminology.
In complex matters good terminology is essential if serious mistakes are to be avoided; ID problems range from the simple to the very complex.
While a change in terminology may cause some temporary discomfort, it seems unlikely that it will cause appreciable confusion over the longer term.
2. Tests of Hypothesis

Tests of hypothesis will be discussed under several subheadings:
a) Adoption of the formalism of tests of hypothesis.
b) Interpretation of "acceptance of the null hypothesis."
c) Choice of statistical test in terms of its detection capability rather than its false alarm rate.
d) Model for test of hypothesis.
e) Cumulative diversions.
a. Adoption of the Formalism of Tests of Hypothesis

This aspect is not a highly crucial issue.
It arises from the fact that tests of hypothesis and confidence intervals are complementary notions of the same mathematical structure.
Usually one is easily converted into the other.
Currently the phraseology of tests of hypothesis and of confidence intervals is used, approximated, or misused.
This writer favors the formulation and language of tests of hypothesis for dichotomized decisions based on ID such as:
Report required or not required, shut plant or do not shut plant.
The test of hypothesis itself is a dichotomized decision.
b. Interpretation of "Acceptance of the Null Hypothesis"

Many statisticians and lay practitioners of statistics use phrases such as "the hypothesis is accepted" when the statistical test failed to reject the hypothesis.
The statistical test cannot prove the truth of the null hypothesis.
The shorthand designation "the hypothesis is accepted,"
when no sufficient evidence for rejection of the hypothesis exists, is unfortunate and often leads to misinterpretation.
If strong prior attitudes exist (e.g., that everything is OK with nuclear energy) an overinterpretation of the acceptance of the null hypothesis becomes more likely.
Emphasis by NRC to guard against this pitfall is indicated.
c. Choice of Statistical Test in Terms of Detection Capability Rather than False Alarm Rate

This issue relates to the basis of safeguards, and its difficulties relate to defining specific safeguards objectives rather than to statistical methods.
Specifically, a low false alarm rate does not do anything to protect the public health or safety.
In order to protect the public against diversions undetected by safeguards exclusive of material accounting, material accounting must have a high probability of detecting diversions which might seriously threaten public health and safety.
It is possible to construct a statistical test with specified probability of alarm for a specified amount diverted.
In fact a statistical test can accommodate (at least conservatively) many specified amounts diverted with corresponding probabilities of detection; e.g., 2 kg of plutonium with probability .95, and 3 kg with probability .99.
Such statistical tests would also moderate the problem associated with unwarranted interpretations of the "acceptance of the null hypothesis."
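One way such a multi-requirement test might be constructed is sketched below; this is an illustration only, not an NRC procedure, assuming a normal ID with an assumed standard deviation, and using the example figures quoted above:

```python
from statistics import NormalDist

# Illustrative sketch: a single alarm threshold that conservatively meets
# several (amount diverted, detection probability) requirements at once.
# Assumes a normal ID; the standard deviation s is an assumed value.

def threshold_for(amount, power, s):
    return amount - NormalDist().inv_cdf(power) * s

def conservative_threshold(requirements, s):
    # The smallest of the individual thresholds satisfies every requirement.
    return min(threshold_for(m, p, s) for m, p in requirements)

s = 0.5                              # assumed std. dev. of ID, kg
reqs = [(2.0, 0.95), (3.0, 0.99)]    # the example figures above
print(round(conservative_threshold(reqs, s), 3))
```

Taking the minimum makes the test conservative: each (amount, power) pair is met with at least the stated probability, at the cost of a somewhat higher false alarm rate.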
d. Model for Test of Hypothesis

The statistical test for a single ID is usually formulated as if ID were distributed normally with known standard deviation.
Because ID is a sum of several random variables, in most situations the assumption of an approximately normal distribution may be well justified.
However, some further investigation of the distribution of ID may be indicated.
The standard deviation of ID is never exactly known but must be estimated.
Some components of the standard deviation or variance can be estimated by proper statistical methods; others may have to be assumed arbitrarily.
The problem of the appropriate standard deviation gets further compounded because of the complex error structure discussed in Section 3.
How to deal with the standard deviation deserves further (and fairly sophisticated) study.
e. Cumulative Diversions

Much of the current material accounting concerns itself with individual inventory periods at a single facility.
Obviously small diversions can be combined to form a serious threat.
The combinations may be over inventory periods at a single facility or over separate facilities or over both.
Tests of statistical hypotheses can be formulated to deal with the cumulation of separate diversions.
NRC is currently considering several statistical procedures to deal with the cumulative aspects of diversion.
Even if a procedure is ultimately judged good from the theoretical point of view, its implementation may not provide sufficient sensitivity to diversion while maintaining low false alarm rates.
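As one simple illustration (expressly not one of the procedures under NRC consideration), a test on the cumulative ID over several periods can be sketched as follows, assuming independent periods so that the variances add:

```python
from statistics import NormalDist

# Simple illustration of a cumulative test: sum the IDs over several
# inventory periods and compare against a critical value built from the
# combined variance (periods assumed independent, normal IDs).

def cumulative_alarm(ids, variances, alpha=0.05):
    total = sum(ids)
    s_total = sum(variances) ** 0.5
    critical = NormalDist().inv_cdf(1 - alpha) * s_total
    return total > critical

# Four periods, each ID too small to alarm on its own (s = 1.0 per period):
print(cumulative_alarm([1.2, 1.1, 1.3, 1.2], [1.0] * 4))   # True
print(cumulative_alarm([1.2], [1.0]))                      # False
```

Because the combined standard deviation grows only as the square root of the number of periods while repeated small diversions add linearly, the cumulative test can detect what the single-period tests miss; as the text cautions, real implementations must still balance this sensitivity against the false alarm rate.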
3. Correct Modeling of Error Structure

The measurement of inventory is a complex and, in part, indirect process subject to many sources of error.
The magnitude or standard deviation of some errors is not known.
The statistical nature of the errors differs.
To use engineering jargon, some are high frequency and others are low frequency.
The low frequency errors may represent slow drifts in instruments or ambient conditions.
The drift may be so slow that errors from the same instrument may be correlated over consecutive inventory periods.
Even the high frequency errors from several different instruments may be cross-correlated because of common ambient conditions, procedures, or human errors.
These correlations affect the standard deviation of ID, and the factors underlying the correlations may affect what one chooses to treat as bias.
The modeling and determination of the appropriate standard deviation and bias of an inventory difference is a formidable problem on which progress can be expected with appropriate effort.
4. Assessment of the Efficacy of Statistical Methods and Material Accounting

One needs to consider the limitations of statistical methods, their direct and indirect costs, and alternatives.
Difficulties with respect to modeling and sensitivity were addressed in previous sections.
These difficulties, despite efforts to overcome them, may prove to be inherent limitations.
Other limitations relate to:
a) Data may be falsified in conjunction with a diversion. Unless a rather foolproof bookkeeping system exists, statistics per se cannot protect against such action.
b) If a diversion is detected by statistical methods, what is the likelihood of recovery or protection of public health and safety?
c) Does the material accounting program make a facility more vulnerable to diversion?
Cost problems are obvious; I have nothing original to offer on this subject.
The next paragraph will touch on how one might deal with costs.
Stricter safeguards are an obvious alternative to material accounting.
Game theory is advocated by some people as an alternative to statistical tests of hypothesis because of its broader approach and flexibility.
Game theory explicitly considers: possible strategies of diverter and defender; and a utility or cost function, a mathematical summary of the benefits and costs associated with diversions and protection against diversion.
While in principle the game theory approach is appealing, there are practical difficulties:
One has to deal with rather complex mathematics and extensive computations.
There may be inherent difficulties in developing "valid" utility functions.
The modeling deficiencies and the difficulties relating to cumulative diversions are not resolved by casting the problem as a game.
In fact, the sensitivity of a game theory solution to deficiencies in the modeling is much harder to determine than the sensitivity under the test of hypothesis approach.
As yet the game theory approach has not been sufficiently explored for critical determination of its usefulness.
5. Review of the Philosophy and Objectives of Safeguards

As pointed out before, the philosophy and objectives of safeguards should be the starting point as well as the end point of material accounting procedures.
The philosophy and objectives overlap with the assessment issues discussed in the previous section.
The latter can be compacted into three questions:
a) Can one fulfill the objectives to a satisfactory degree by statistical methods?
b) Are there better or cheaper ways to achieve the objectives?
c) What is the appropriate allocation, if any, of resources to the statistical methods and the necessary measurement system?
To cite some other issues:
a) Is detection of small diversions important? They could be precursors of future diversions. They could enable a diverter to learn how to handle and process nuclear material. And, as pointed out before, small diversions can be combined into serious threats.
b) What is the psychological value of material accounting irrespective of its effectiveness? Is it important to the public? Does it deter potential diverters?
REFERENCES

1. Ahearne, John F., NRC, letter to Senator Gary Hart, Chairman, Subcommittee on Nuclear Regulation, Committee on Environment and Public Works, U.S. Senate, Subject: Commission Response to Questions Transmitted in the November 1, 1979 Letter to Dr. Hendrie, March 5, 1980.

2. Altman, Willard D., Claudia G. Stetler, and John Hockert, NRC, memorandum to Eugene Perchonok, "CEP Input to Commission Paper on the Material Control and Material Accounting Plan," October 26, 1978.

3. American National Standards, ANSI N15.16, "American National Standard Limit of Error Concepts and Principles of Calculation in Nuclear Materials Control," 1974.

4. Bulmer, M. G., Principles of Statistics, Dover, New York, N.Y., 1979.

5. Chilk, Samuel J., NRC, memorandum to William J. Dircks, "Staff Requirements - Briefing on Two Approaches to the Treatment of Inventory Differences in Nuclear Material Accounting, 10:00 A.M., Monday, March 31, 1980, Room 550 East-West Towers, Bethesda, Maryland (Open to Public Attendance)," April 4, 1980.

6. General Services Administration, "Material Balance, Inventory, and Record Requirements," Code of Federal Regulations, Title 10, Energy, Part 70, January 1, 1980.

7. Jaech, John L., Statistical Methods in Nuclear Material Accounting, Technical Information Center, Office of Information Services, United States Atomic Energy Commission, TID-26298, 1973.

8. Lawrence Berkeley Laboratory, NUREG/CR-0083, "Material Unaccounted for Performance Analysis of a U.S. Nuclear Regulatory Commission Licensee," January 1979.

9. Lumb, R. F., "Report to the Atomic Energy Commission by the Ad Hoc Advisory Panel on Safeguarding Special Nuclear Material," TID-25390, March 10, 1967.

10. Mendenhall & Ott, Understanding Statistics, Second Edition, Duxbury Press, North Scituate, Massachusetts, 1976.

11. Moore, Roger H., "Some Funny Things Happened on the Way to the Limit-of-Error Standard," Journal of the Institute of Nuclear Materials Management, Vol. III, No. 3, 1974.

12. Moore, Roger H., "Some Thoughts on 'Some Thoughts on Random Errors, Systematic Errors, and Biases' by John L. Jaech," Journal of the Institute of Nuclear Materials Management, Vol. IV, No. 1, 1975.

13. Page, R. G., Atomic Energy Commission, letter to Licensees, Subject: Guidelines Indicating Appropriate Licensee Action Under Specific Conditions of Excessive MUF, December 6, 1974.

14. Perchonok, Eugene, NRC, memorandum to Division of Safeguards Staff, "Safeguards Objectives," June 10, 1976.

15. U.S. Atomic Energy Commission, Regulatory Guide 5.13, "Conduct of Nuclear Material Physical Inventories," November 1973.

16. U.S. Atomic Energy Commission, Regulatory Guide 5.18, "Limit of Error Concepts and Principles of Calculation in Nuclear Materials Control," January 1974.

17. U.S. Atomic Energy Commission, Regulatory Guide 5.22, "Assessment of the Assumption of Normality (Employing Individual Observed Values)," April 1974.

18. U.S. Atomic Energy Commission, Regulatory Guide 5.33, "Statistical Evaluation of Material Unaccounted For," June 1974.

19. U.S. Nuclear Regulatory Commission, Final Draft, "Material Control and Accounting Task Force Report," Vol. I, August 1977.

20. U.S. Nuclear Regulatory Commission, NUREG-0350, "Report on Strategic Special Nuclear Material Inventory Differences," August 1977.

21. U.S. Nuclear Regulatory Commission, NUREG-0450, "Report of the Material Control and Material Accounting Task Force," April 1978.

22. U.S. Nuclear Regulatory Commission, Nuclear Material Safeguards Status Report (White Book), June 1979. (Confidential)

23. U.S. Nuclear Regulatory Commission, "Comprehensive Evaluation Program Report on B&W Lynchburg," July 1979. (Confidential)

24. U.S. Nuclear Regulatory Commission, NUREG-0430, "Licensed Fuel Status Report," Vol. I, Nos. 2-4, March 1979-January 1980.

25. U.S. Nuclear Regulatory Commission, NUREG-0627, "A Safeguards Study of the Nuclear Materials and Equipment Corporation Uranium Processing Plant, Apollo, Pennsylvania," Executive Summary, January 1980.

26. U.S. Nuclear Regulatory Commission, Statement of Work for MC&A Example System Development, Contract FIN #82134 with Pacific Northwest Laboratories, May 1, 1980.

27. Wimpey, Frank and George Orlov, Development of Improved Techniques for Analyzing Material Control and Accounting Data, Final Report, Report SAI 81-206-WA, Science Applications Inc., McLean, Virginia 22102, June 30, 1980.
UNITED STATES
NUCLEAR REGULATORY COMMISSION
WASHINGTON, D. C. 20555
MEMORANDUM FOR: Robert F. Burnett, Director, Division of Safeguards, NMSS
                Norman M. Haller, Director, Office of Management and Program Analysis

FROM: Lee R. Abramson, Acting Chief, Applied Statistics Branch, MPA
      Theodore S. Sherr, Chief, MC&A Development Branch, Division of Safeguards, NMSS

SUBJECT: STATUS OF COMMISSION REQUESTED REPORT ON THE STATISTICAL TREATMENT OF INVENTORY DIFFERENCES

The various sections of the most recent draft of the subject report were distributed for comment between May 29 and June 19. Although not all of the comments have yet been received, we do not expect that any additional comments will raise new substantive issues.
The MPA/NMSS writing team is now reviewing the comments and revising the draft. However, there are a number of comments that, in the writing team's judgment, should not be incorporated into the text. To resolve this impasse, we propose the following procedure:
1. The writing team will develop the next draft, incorporating as many of the comments as possible, with indications in the margin where comments were made but not incorporated. These unincorporated comments will be compiled, with cross-references to the text, together with an explanation of the reasons for their nonacceptance by the team.
2. The writing team draft will be reviewed by a technical editor in the Special Projects Branch of MPA.
3. The edited draft, together with the compilation of comments, will be distributed to the staff reviewers for comments on any new text and reassessment of their unincorporated comments in light of the explanations provided.
4. The edited draft, together with an updated compilation of unincorporated comments and associated explanations, will be provided to you for review. In this context, we will request your direction as to which, if any, of the unincorporated comments should be reflected in the text, as well as any additional changes that you may have. MPA and NMSS management decisions should be coordinated to eliminate, or at least minimize, conflicting directions. Notwithstanding, if there remain any differences in MPA and NMSS management views, these will be identified accordingly in the text.
5. As a result of your direction, certain positions of writing team members and reviewers may not be reflected in the text of the report. These individual staff members will be given an opportunity to present their views on these aspects of the resulting text in an appendix.
6. The writing team will provide you with a final draft for transmittal to the EDO. To assure that the text is revised to your satisfaction in a timely manner, revisions to the text will be provided to you as they are developed by the team.
Based on discussion with the writing team, the following schedule appears to be realistic:
July 14:      Writing team to complete revised draft, compile comments, and provide explanations.
July 15:      Draft to technical editor.
July 21:      Edited draft and unincorporated comments distributed to reviewers.
July 28:      Comments on new text and reassessed comments due.
July 31:      Draft and updated unincorporated comments to Burnett/Haller.
August 7:     Burnett/Haller direction for changes to draft.
August 7-22:  Writing team revises draft according to Burnett/Haller direction and prepares appendix reflecting individual concerns about residual unincorporated comments and changes to the July 31 draft.
August 25:    Final report to MPA/NMSS management.
August 27:    Transmittal Commission paper and final report to EDO.
August 29:    Paper to Commission.
Please advise if the proposed procedure and schedule are acceptable.
If so, we will request an extension of the current due date of July 3 to August 29.
Lee R. Abramson

Theodore S. Sherr

cc: See Page 3