ISO 13528:2022(en)

Contents
Foreword
0 Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 General principles
 4.1 General requirements for statistical methods
 4.2 Basic model
 4.3 General approaches for the evaluation of performance
5 Guidelines for the statistical design of proficiency testing schemes
 5.1 Introduction to the statistical design of proficiency testing schemes
 5.2 Basis of a statistical design
 5.3 Considerations for the statistical distribution of results
 5.4 Considerations for small numbers of participants
 5.5 Guidelines for choosing the reporting format
  5.5.1 General requirements for reporting format
  5.5.2 Reporting of replicate measurements
  5.5.3 Reporting of ‘less than’ or ‘greater than’ a limit (censored data)
  5.5.4 Number of significant digits
6 Guidelines for the initial review of proficiency testing items and results
 6.1 Homogeneity and stability of proficiency test items
 6.2 Considerations for different measurement methods
 6.3 Blunder removal
 6.4 Visual review of data
 6.5 Robust statistical methods
 6.6 Outlier techniques for individual results
7 Determination of the assigned value and its standard uncertainty
 7.1 Choice of method of determining the assigned value
 7.2 Determining the uncertainty of the assigned value
 7.3 Formulation
 7.4 Certified reference material
 7.5 Results from one laboratory
 7.6 Consensus value from expert laboratories
 7.7 Consensus value from participant results
 7.8 Comparison of the assigned value with an independent reference value
8 Determination of criteria for evaluation of performance
 8.1 Approaches for determining evaluation criteria
 8.2 By perception of experts
 8.3 By experience from previous rounds of a proficiency testing scheme
 8.4 By use of a general model
 8.5 Using the repeatability and reproducibility standard deviations from a previous collaborative study of precision of a measurement method
 8.6 From data obtained in the same round of a proficiency testing scheme
 8.7 Monitoring interlaboratory agreement
9 Calculation of performance statistics
 9.1 General considerations for determining performance
 9.2 Limiting the uncertainty of the assigned value
 9.3 Estimates of deviation (measurement error)
 9.4 z scores
 9.5 z′ scores
 9.6 Zeta scores (ζ)
 9.7 En scores
 9.8 Evaluation of participant uncertainties in testing
 9.9 Combined performance scores
10 Graphical methods for describing performance scores
 10.1 Application of graphical methods
 10.2 Histograms of results or performance scores
 10.3 Kernel density plots
 10.4 Bar-plots of standardized performance scores
 10.5 Youden plot
 10.6 Plots of repeatability standard deviations
 10.7 Split samples
 10.8 Graphical methods for combining performance scores over several rounds of a proficiency testing scheme
11 Design and analysis of qualitative proficiency testing schemes (including nominal and ordinal properties)
 11.1 Types of qualitative data
 11.2 Statistical design
 11.3 Assigned values for qualitative proficiency testing schemes
 11.4 Performance evaluation and scoring for qualitative proficiency testing schemes
Annex A (normative) Symbols
Annex B (informative) Homogeneity and stability of proficiency test items
Annex C (informative) Robust analysis
Annex D (informative) Additional guidance on statistical procedures
Annex E (informative) Illustrative examples
Annex F (Informative) Example of computer code for plotting and resampling analysis (“bootstrapping”) of PT results
Bibliography

International Standard

ISO 13528
Statistical methods for use in proficiency testing by interlaboratory comparison
Méthodes statistiques utilisées dans les essais d'aptitude par comparaison interlaboratoires
Reference number
ISO 13528:2022(en)
Third edition
2022-08


Foreword

ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies (ISO member bodies). The work of preparing International Standards is normally carried out through ISO technical committees. Each member body interested in a subject for which a technical committee has been established has the right to be represented on that committee. International organizations, governmental and non-governmental, in liaison with ISO, also take part in the work. ISO collaborates closely with the International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization.
The procedures used to develop this document and those intended for its further maintenance are described in the ISO/IEC Directives, Part 1. In particular the different approval criteria needed for the different types of ISO documents should be noted. This document was drafted in accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives).
Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO shall not be held responsible for identifying any or all such patent rights. Details of any patent rights identified during the development of the document will be in the Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents).
Any trade name used in this document is information given for the convenience of users and does not constitute an endorsement.
For an explanation of the meaning of ISO specific terms and expressions related to conformity assessment, as well as information about ISO's adherence to the WTO principles in the Technical Barriers to Trade (TBT), see the following URL: www.iso.org/iso/foreword.html
The committee responsible for this document is ISO/TC 69, Applications of statistical methods, Subcommittee SC 6, Measurement methods and results.
This third edition of ISO 13528 cancels and replaces the second edition (ISO 13528:2015), of which it constitutes a minor revision. The changes are as follows:
notes have been added to 10.1, 10.4.3 and 10.5.3 to draw attention to additional graphical techniques that can assist in meeting the provisions of 10.1;
Formulae B.4 and B.8 have been corrected to use $s_t^2$ instead of $w_t^2$;
Formula B.16 has been corrected so that the term inside the square root is always non-negative;
in Table C.2, the correction factor associated with p = 2 has been corrected to read 0,3994;
additional literature references to the source of values in Table C.2 have been added to the Bibliography and referenced from Notes 1 and 2 of C.5.2.1;
font styles (Italic or Roman) have been amended throughout for consistency in formulae.

0 Introduction

0.1   The purposes of proficiency testing
Proficiency testing involves the use of interlaboratory comparisons to determine the performance of participants (which may be laboratories, inspection bodies, or individuals) for specific tests or measurements, and to monitor their continuing performance. There are a number of typical purposes of proficiency testing, as described in the Introduction to ISO/IEC 17043. These include the evaluation of laboratory performance, the identification of problems in laboratories, establishing effectiveness and comparability of test or measurement methods, the provision of additional confidence to laboratory customers, validation of uncertainty claims, and the education of participating laboratories. The statistical design and analytical techniques applied shall be appropriate for the stated purpose(s).
0.2   Rationale for scoring in proficiency testing schemes
A variety of scoring strategies is available and in use for proficiency testing. Although the detailed calculations differ, most proficiency testing schemes compare the participant’s deviation from an assigned value with a numerical criterion which is used to decide whether or not the deviation represents cause for concern. The strategies used for value assignment and for choosing a criterion for assessment of the participant deviations are therefore critical. In particular, it is important to consider whether the assigned value and criterion for assessing deviations should be independent of participant results, or should be derived from the results submitted. In this document, both strategies are provided for. However, attention is drawn to the discussion in Clauses 7 and 8 of the advantages and disadvantages of choosing assigned values or criteria for assessing deviations that are not derived from the participant results. It will be seen that, in general, choosing assigned values and assessment criteria independently of participant results offers advantages. This is particularly the case for the criterion used to assess deviations from the assigned value – such as the standard deviation for proficiency assessment or an allowance for measurement error – for which a consistent choice based on suitability for a particular end use of the measurement results is especially useful.
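As a brief, non-normative illustration of this comparison (the scoring procedures themselves are specified in Clause 9), the widely used z score standardizes the participant’s deviation from the assigned value by the standard deviation for proficiency assessment:

\[ z = \frac{x - x_{\mathrm{pt}}}{\sigma_{\mathrm{pt}}} \]

where x is the participant result, x_pt is the assigned value and σ_pt is the standard deviation for proficiency assessment. Conventionally, a z score in excess of 2 indicates a need to investigate possible causes, and a z score of 3 or greater is taken as an action signal (see 3.10).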
0.3   ISO 13528 and ISO/IEC 17043
This document provides support for the implementation of ISO/IEC 17043, particularly on the requirements for statistical design, validation of proficiency test items, review of results, and reporting of summary statistics. ISO/IEC 17043:2010, Annex B, briefly describes the general statistical methods that are used in proficiency testing schemes. This document is intended to be complementary to ISO/IEC 17043, providing the detailed guidance on particular statistical methods for proficiency testing that is lacking in that document.
The definition of proficiency testing in ISO/IEC 17043 is repeated in this document, with the notes that describe different types of proficiency testing and the range of designs that can be used. This document cannot specifically cover all purposes, designs, matrices and measurands. The techniques presented in this document are intended to be broadly applicable, especially for newly established proficiency testing schemes. It is expected that statistical techniques used for a particular proficiency testing scheme will evolve as the scheme matures; and the scores, evaluation criteria, and graphical techniques will be refined to better serve the specific needs of a target group of participants, accreditation bodies, and regulatory authorities.
This document incorporates published guidance for the proficiency testing of chemical analytical laboratories[32] but additionally includes a wider range of procedures to permit use with valid measurement methods and qualitative identifications. The revision of this document retains most of the statistical methods and guidance from the first edition, extended as necessary to reflect the previously referenced documents and the extended scope of ISO/IEC 17043, which covers proficiency testing for individuals and inspection bodies; ISO/IEC 17043:2010, Annex B, also includes considerations for qualitative results.
This document includes statistical techniques that are consistent with other International Standards, particularly those of ISO/TC 69/SC 6, notably the ISO 5725 series of standards on accuracy (trueness and precision). The techniques are also intended to be consistent, where appropriate, with other International Standards, and with ISO/IEC Guide 98-3 (GUM) and ISO/IEC Guide 99 (VIM).
0.4   Statistical expertise
ISO/IEC 17043 requires that in order to be competent, a proficiency testing provider shall have access to statistical expertise and shall authorize specific personnel to conduct statistical analysis. Neither ISO/IEC 17043 nor this document can specify further what that necessary expertise is. For some applications an advanced degree in statistics is useful, but usually the needs for expertise can be met by individuals with technical expertise in other areas, who are familiar with basic statistical concepts and have experience or training in the common techniques applicable to the analysis of data from proficiency testing schemes. If an individual is responsible for statistical design and/or analysis, it is very important that this person has experience with interlaboratory comparisons, even if that person has an advanced degree in statistics. Conventional advanced statistical training often does not include exercises with interlaboratory comparisons, and the unique causes of measurement error that occur in proficiency testing can seem obscure. The guidance in this document cannot provide all the necessary expertise to consider all applications, and cannot replace the experience gained by working with interlaboratory comparisons.
0.5   Computer software
Computer software that is needed for statistical analysis of proficiency testing data can vary greatly, ranging from simple spreadsheet arithmetic for small proficiency testing schemes using known reference values to sophisticated statistical software used for statistical methods reliant on iterative calculations or other advanced numerical methods. Most of the techniques in this document can be accomplished with conventional spreadsheet applications, perhaps with customised routines for a particular proficiency testing scheme or analysis; some techniques will require computer applications that are freely available. In all cases, users are expected to verify the validity and accuracy of their calculations, especially when special routines have been entered by the user. However, even when the techniques in this document are appropriate and correctly implemented by adequate computer applications, they cannot be applied without attention from an individual with technical and statistical expertise that is sufficient to understand the nature of the applications and the statistical assumptions, and to identify and investigate anomalies that can occur in any round of a proficiency testing scheme.
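As an illustration only, and not a normative procedure of this document, the short Python sketch below shows the kind of routine that a spreadsheet or simple script might implement for a single measurand in one round: the participant median is used as a consensus assigned value, the scaled median absolute deviation (MADe = 1,483 × MAD; Annex C describes the robust methods recommended by this document) is used as a simple standard deviation for proficiency assessment, and z scores are flagged using the conventional limits noted in 3.10. All function and variable names are illustrative.

# Illustrative sketch only; not a normative procedure of ISO 13528.
import statistics

def score_round(results, assigned_value=None, sigma_pt=None):
    # Consensus assigned value: participant median (Clause 7 describes the options).
    x_pt = assigned_value if assigned_value is not None else statistics.median(results)
    if sigma_pt is None:
        # Scaled median absolute deviation (MADe = 1.483 * MAD) as a simple robust
        # dispersion estimate; Annex C gives the recommended robust procedures.
        mad = statistics.median([abs(x - x_pt) for x in results])
        sigma_pt = 1.483 * mad
    z = [(x - x_pt) / sigma_pt for x in results]
    return x_pt, sigma_pt, z

results = [10.1, 9.8, 10.3, 9.9, 12.7, 10.0, 10.2]   # reported results, one measurand
x_pt, sigma_pt, z_scores = score_round(results)
for x, z in zip(results, z_scores):
    flag = "action signal" if abs(z) >= 3.0 else ("warning" if abs(z) > 2.0 else "")
    print(f"x = {x:5.2f}   z = {z:6.2f}   {flag}")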

International Standard ISO 13528:2022(en)
Statistical methods for use in proficiency testing by interlaboratory comparison

1 Scope

This document provides detailed descriptions of statistical methods for proficiency testing providers to use to design proficiency testing schemes and to analyse the data obtained from those schemes. This document provides recommendations on the interpretation of proficiency testing data by participants in such proficiency testing schemes and by accreditation bodies.
The procedures in this document can be applied to demonstrate that the measurement results obtained by laboratories, inspection bodies, and individuals meet specified criteria for acceptable performance.
This document is applicable to proficiency testing where the results reported are either quantitative measurements or qualitative observations on test items.
NOTE   The procedures in this document can also be applied for the assessment of expert opinion where the opinions or judgments are reported in a form which can be compared objectively with an independent reference value or a consensus statistic. For example, when classifying proficiency test items into known categories by inspection - or in determining by inspection whether proficiency test items arise, or do not arise, from the same original source - and the classification results are compared objectively, the provisions of this document that relate to nominal (qualitative) properties can be applied.

2 Normative references

The following documents are referred to in the text in such a way that some or all of their content constitutes requirements of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.
ISO 3534-1, Statistics — Vocabulary and symbols — Part 1: General statistical terms and terms used in probability
ISO 3534-2, Statistics — Vocabulary and symbols — Part 2: Applied statistics
ISO 5725-1, Accuracy (trueness and precision) of measurement methods and results — Part 1: General principles and definitions
ISO/IEC 17043, Conformity assessment — General requirements for proficiency testing
ISO Guide 30, Reference materials — Selected terms and definitions
ISO/IEC Guide 99, International vocabulary of metrology — Basic and general concepts and associated terms (VIM)

3 Terms and definitions

For the purposes of this document, the terms and definitions given in ISO 3534-1, ISO 3534-2, ISO 5725-1, ISO/IEC 17043, ISO/IEC Guide 99, ISO Guide 30, and the following apply. In the case of differences between these references on the use of terms, the definitions in ISO 3534-1 and ISO 3534-2 apply. Mathematical symbols are listed in Annex A.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
ISO Online browsing platform: available at https://www.iso.org/obp
IEC Electropedia: available at https://www.electropedia.org/
3.1
interlaboratory comparison
organization, performance and evaluation of measurements or tests on the same or similar items by two or more laboratories in accordance with predetermined conditions
3.2
proficiency testing
evaluation of participant performance against pre-established criteria by means of interlaboratory comparisons (3.1)
Note 1 to entry: For the purposes of this document, the term “proficiency testing” is taken in its widest sense and includes, but is not limited to:
quantitative scheme — where the objective is to quantify one or more measurands for each proficiency test item;
qualitative scheme — where the objective is to identify or describe one or more qualitative characteristics of the proficiency test item;
sequential scheme — where one or more proficiency test items are distributed sequentially for testing or measurement and returned to the proficiency testing provider at intervals;
simultaneous scheme — where proficiency test items are distributed for concurrent testing or measurement within a defined time period;
single occasion exercise — where proficiency test items are provided on a single occasion;
continuous scheme — where proficiency test items are provided at regular intervals;
sampling — where samples are taken for subsequent analysis and the purpose of the proficiency testing scheme includes evaluation of the execution of sampling; and
data interpretation — where sets of data or other information are furnished and the information is processed to provide an interpretation (or other outcome).
3.3
assigned value
value attributed to a particular property of a proficiency test item
3.4
standard deviation for proficiency assessment
measure of dispersion used in the evaluation of results of proficiency testing (3.2)
Note 1 to entry: This can be interpreted as the population standard deviation of results from a hypothetical population of laboratories performing exactly in accordance with requirements.
Note 2 to entry: The standard deviation for proficiency assessment applies only to ratio and interval scale results.
Note 3 to entry: Not all proficiency testing schemes evaluate performance based on the dispersion of results.
[SOURCE: ISO/IEC 17043:2010, modified — In the definition “based on the available information” has been deleted. Note 1 to the entry has been added, and Notes 2 and 3 have been slightly edited.]
3.5
measurement error
measured quantity value minus a reference quantity value
[SOURCE: ISO/IEC Guide 99:2007, modified — Notes have been deleted.]
3.6
maximum permissible error
extreme value of measurement error (3.5), with respect to a known reference quantity value, permitted by specifications or regulations for a given measurement, measuring instrument, or measuring system
[SOURCE: ISO/IEC Guide 99:2007, modified — Notes have been deleted.]
3.7
z score
standardized measure of performance, calculated using the participant result, assigned value (3.3) and the standard deviation for proficiency assessment (3.4)
Note 1 to entry: A common variation on the z score, sometimes denoted z’ (commonly pronounced z-prime), is formed by combining the uncertainty of the assigned value with the standard deviation for proficiency assessment before calculating the z score.
3.8
zeta score
standardized measure of performance, calculated using the participant result, assigned value (3.3) and the combined standard uncertainties for the result and the assigned value (3.3)
3.9
proportion of allowed limit score
standardized measure of performance, calculated using the participant result, assigned value (3.3) and the criterion for measurement error (3.5) in a proficiency test
Note 1 to entry: For single results, performance can be expressed as the deviation from the assigned value (D or D %).
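As a non-normative orientation to the scores defined in 3.7 to 3.9, using the symbols of Annex A (the defining formulas are given in Clause 9), the deviation and the uncertainty-based scores are typically calculated as:

\[ D = x - x_{\mathrm{pt}}, \qquad D\% = 100\,\frac{x - x_{\mathrm{pt}}}{x_{\mathrm{pt}}} \]

\[ z' = \frac{x - x_{\mathrm{pt}}}{\sqrt{\sigma_{\mathrm{pt}}^{2} + u^{2}(x_{\mathrm{pt}})}}, \qquad \zeta = \frac{x - x_{\mathrm{pt}}}{\sqrt{u^{2}(x) + u^{2}(x_{\mathrm{pt}})}} \]

where x is the participant result, x_pt is the assigned value, σ_pt is the standard deviation for proficiency assessment, u(x) is the participant's standard uncertainty and u(x_pt) is the standard uncertainty of the assigned value.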
3.10
action signal
indication of a need for action arising from a proficiency test result
EXAMPLE        A z score in excess of 2 is conventionally taken as an indication of a need to investigate possible causes; a z score of 3 or greater is conventionally taken as an action signal indicating a need for corrective action.
3.11
consensus value
value derived from a collection of results in an interlaboratory comparison (3.1)
Note 1 to entry: The phrase ‘consensus value’ is typically used to describe estimates of location and dispersion derived from participant results in a round of a proficiency testing scheme, but may also be used to refer to values derived from results of a specified subset of such results or, for example, from a number of expert laboratories.
3.12
outlier
member of a set of values which is inconsistent with other members of that set
Note 1 to entry: An outlier can arise by chance from the expected population, originate from a different population, or be the result of an incorrect recording or other blunder.
Note 2 to entry: Many proficiency testing schemes use the term outlier to designate a result that generates an action signal. This is not the intended use of the term. While outliers will usually generate action signals, it is possible to have action signals from results that are not outliers.
[SOURCE: ISO 5725‑1:1994, modified — The Notes to the entry have been added.]
3.13
participant
laboratory, organization, or individual that receives proficiency test items and submits results for review by the proficiency testing (3.2) provider
3.14
proficiency test item
sample, product, artefact, reference material, piece of equipment, measurement standard, data set or other information used to assess participant (3.13) performance in proficiency testing (3.2)
Note 1 to entry: In most instances, proficiency test items meet the ISO Guide 30 definition of “reference material” (3.17).
3.15
proficiency testing provider
organization which takes responsibility for all tasks in the development and operation of a proficiency testing (3.2) scheme
3.16
proficiency testing scheme
proficiency testing (3.2) designed and operated in one or more rounds for a specified area of testing, measurement, calibration or inspection
Note 1 to entry: A proficiency testing scheme might cover a particular type of test, calibration, inspection or a number of tests, calibrations or inspections on proficiency test items.
3.17
reference material
RM
material, sufficiently homogeneous and stable with respect to one or more specified properties, which has been established to be fit for its intended use in a measurement process
Note 1 to entry: RM is a generic term.
Note 2 to entry: Properties can be quantitative or qualitative, e.g. identity of substances or species.
Note 3 to entry: Uses may include the calibration of a measuring system, assessment of a measurement procedure, assigning values to other materials, and quality control.
[SOURCE: ISO Guide 30:2015, modified — Note 4 has been deleted.]
3.18
certified reference material
CRM
reference material (RM) (3.17) characterized by a metrologically valid procedure for one or more specified properties, accompanied by an RM certificate that provides the value of the specified property, its associated uncertainty, and a statement of metrological traceability
Note 1 to entry: The concept of value includes a nominal property or a qualitative attribute such as identity or sequence. Uncertainties for such attributes may be expressed as probabilities or levels of confidence.
[SOURCE: ISO Guide 30:2015, modified — Notes 2, 3 and 4 have been deleted.]

Bibliography
[1]
ISO 5725-2, Accuracy (trueness and precision) of measurement methods and results — Part 2: Basic method for the determination of repeatability and reproducibility of a standard measurement method
[2]
ISO 5725-3, Accuracy (trueness and precision) of measurement methods and results — Part 3: Intermediate measures of the precision of a standard measurement method
[3]
ISO 5725-4, Accuracy (trueness and precision) of measurement methods and results — Part 4: Basic methods for the determination of the trueness of a standard measurement method
[4]
ISO 5725-5, Accuracy (trueness and precision) of measurement methods and results — Part 5: Alternative methods for the determination of the precision of a standard measurement method
[5]
ISO 5725-6, Accuracy (trueness and precision) of measurement methods and results — Part 6: Use in practice of accuracy values
[6]
ISO 7870-2:2013, Control charts — Part 2: Shewhart control charts
[7]
ISO 11352, Water quality — Estimation of measurement uncertainty based on validation and quality control data
[8]
ISO 11843-1, Capability of detection — Part 1: Terms and definitions
[9]
ISO 11843-2, Capability of detection — Part 2: Methodology in the linear calibration case
[10]
ISO 16269-4, Statistical interpretation of data — Part 4: Detection and treatment of outliers
[11]
ISO/IEC 17011, Conformity assessment — Requirements for accreditation bodies accrediting conformity assessment bodies
[12]
ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories
[13]
ISO Guide 35, Reference materials — Guidance for characterization and assessment of homogeneity and stability
[14]
ISO/IEC Guide 98-3, Uncertainty of measurement — Part 3: Guide to the expression of uncertainty in measurement (GUM:1995)
[15]
Analytical Methods Committee, Royal Society of Chemistry. Accred. Qual. Assur. 2010, 15, pp. 73-79
[16]
CCQM Guidance note: Estimation of a consensus KCRV and associated Degrees of Equivalence. Version 10. Bureau International des Poids et Mesures, Paris (2013)
[17]
Davison A.C., & Hinkley D.V. Bootstrap Methods and Their Application. Cambridge University Press, 1997
[18]
Efron B., & Tibshirani R. An Introduction to the Bootstrap. Chapman & Hall, 1993
[19]
Lamberty A., Schimmel H., & Pauwels J. The study of the stability of reference materials by isochronous measurements. Fresenius J. Anal. Chem. 1998, 360, pp. 359-361
[20]
Gower J.C. A general coefficient of similarity and some of its properties. Biometrics. 1971, 27 (4) pp. 857-871
[21]
Helsel D.R. Nondetects and data analysis: statistics for censored environmental data. Wiley Interscience, 2005
[22]
Horwitz W. Evaluation of analytical methods used for regulations of food and drugs. Anal. Chem. 1982, 54 pp. 67A-76A
[23]
Jackson J.E. Quality control methods for two related variables. Industrial Quality Control. 1956, 7 pp. 2-6
[24]
Kuselman I., & Fajgelj A. IUPAC/CITAC Guide: Selection and use of proficiency testing schemes for a limited number of participants—chemical analytical laboratories (IUPAC Technical Report). Pure Appl. Chem. 2010, 82 (5) pp. 1099-1135
[25]
Maronna R.A., Martin R.D., & Yohai V.J. Robust Statistics: Theory and methods. John Wiley & Sons Ltd, Chichester, England, 2006
[26]
Müller C.H., & Uhlig S. Estimation of variance components with high breakdown point and high efficiency. Biometrika. 2001, 88 (2) pp. 353-366
[27]
Rousseeuw P.J., & Verboven S. Comput. Stat. Data Anal. 2002, 40 pp. 741-758
[28]
Scott D.W. Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley, 1992
[29]
Sheather S.J., & Jones M.C. A reliable data-based bandwidth selection method for kernel density estimation. J. R. Stat. Soc., B. 1991, 53 pp. 683-690
[30]
Silverman B.W. Density Estimation. Chapman and Hall, London, 1986
[31]
Thompson M. Analyst (Lond.). 2000, 125 pp. 385-386
[32]
Thompson M., Ellison S.L.R., & Wood R. “The International Harmonized Protocol for the proficiency testing of analytical chemistry laboratories” (IUPAC Technical Report). Pure Appl. Chem. 2006, 78 (1) pp. 145-196
[33]
Thompson M., Willetts P., Anderson S., Brereton P., & Wood R. Collaborative trials of the sampling of two foodstuffs, wheat and green coffee. Analyst (Lond.). 2002, 127 pp. 689-691
[34]
Uhlig S. Robust estimation of variance components with high breakdown point in the 1-way random effect model. In: Kitsos C.P. and Edler L. (eds.), Industrial Statistics. Physica, pp. 65-73, 1997
[35]
Uhlig S. Robust estimation of between and within laboratory standard deviation with measurement results below the detection limit. Journal of Consumer Protection and Food Safety, 2015
[36]
van Nuland Y. ISO 9002 and the circle technique. Qual. Eng. 1992, 5 pp. 269-291
[37]
[38]
ISO 16269-4, Statistical interpretation of data — Part 4: Detection and treatment of outliers
[39]
Robouch P., Naji Y., & Vermaercke P. The “Naji Plot”, a simple graphical tool for the evaluation of inter-laboratory comparisons, in Richter D., Wöger W., & Hässelbarth W. (eds.), Data analysis of key comparisons, Braunschweig and Berlin, 2003, ISBN 3-89701-933-3.
[40]
Ellison S.L.R. Applications of robust estimators of covariance in examination of inter-laboratory study data. Analytical Methods. 2019, 11, pp. 2639-2649. https://doi.org/10.1039/C8AY02724B
[41]
Maechler M., Rousseeuw P., Croux C., Todorov V., Ruckstuhl A., Salibian-Barrera M., Conceicao E.L.T., di Palma M.A., et al. (2021). robustbase: Basic Robust Statistics. R package version 0.93-7. URL: http://CRAN.R-project.org/package=robustbase
[42]
Croux C., & Rousseeuw P.J. Time-efficient algorithms for two highly robust estimators of scale. In: Computational Statistics, Volume 1, eds. Y. Dodge and J. Whittaker. Heidelberg: Physica-Verlag, pp. 411-428, 1992
ICS 03.120.30