BS PD ISO/IEC TR 24027:2021 Information technology. Artificial intelligence (AI). Bias in AI systems and AI aided decision making
standard by BSI Group, 11/19/2021
This document addresses bias in relation to AI systems, especially with regards to AI-aided decision-making. Measurement techniques and methods for assessing bias are described, with the aim to address and treat bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but not limited to data collection, training, continual learning, design, testing, evaluation and use.
PD ISO/IEC TR 24027:2021 PUBLISHED DOCUMENT
This Published Document is the UK implementation of ISO/IEC TR 24027:2021.
The UK participation in its preparation was entrusted to Technical
Committee ART/1, Artificial Intelligence.
A list of organizations represented on this committee can be obtained on request to its committee manager.
Contractual and legal considerations
This publication has been prepared in good faith, however no representation, warranty, assurance or undertaking (express or implied) is or will be made, and no responsibility or liability is or will be accepted by BSI in relation to the adequacy, accuracy, completeness or reasonableness of this publication. All and any such responsibility and liability is expressly disclaimed to the full extent permitted by the law.
This publication is provided as is, and is to be used at the
recipient’s own risk.
The recipient is advised to consider seeking professional guidance with
respect to its use of this publication.
This publication is not intended to constitute a contract. Users are responsible for its correct application.
This publication is not to be regarded as a British Standard.
© The British Standards Institution 2021 Published by BSI Standards Limited 2021
ISBN 978 0 539 16390 2
ICS 35.020
Compliance with a Published Document cannot confer immunity from legal obligations.
This Published Document was published under the authority of the
Standards Policy and Strategy Committee on 30 November 2021.
Amendments/corrigenda issued since publication
Date Text affected
TECHNICAL REPORT
First edition
2021-11
Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making
Reference number ISO/IEC TR 24027:2021(E)
© ISO/IEC 2021
Foreword v
Introduction vi
Scope 1
Normative references 1
Terms and definitions 1
Artificial intelligence 1
Bias 2
Abbreviations 3
Overview of bias and fairness 3
General 3
Overview of bias 3
Overview of fairness 5
Sources of unwanted bias in AI systems 6
General 6
Human cognitive biases 7
General 7
Automation bias 7
Group attribution bias 8
Implicit bias 8
Confirmation bias 8
In-group bias 8
Out-group homogeneity bias 8
Societal bias 9
Rule-based system design 9
Requirements bias 10
Data bias 10
General 10
Statistical bias 10
Data labels and labelling process 11
Non-representative sampling 11
Missing features and labels 11
Data processing 12
Simpson's paradox 12
Data aggregation 12
Distributed training 12
Other sources of data bias 12
Bias introduced by engineering decisions 12
General 12
Feature engineering 12
Algorithm selection 13
Hyperparameter tuning 13
Informativeness 14
Model bias 14
Model interaction 14
Assessment of bias and fairness in AI systems 14
General 14
Confusion matrix 15
Equalized odds 16
Equality of opportunity 16
Demographic parity 17
Predictive equality 17
Other metrics 17
© ISO/IEC 2021 – All rights reserved
iii
Treatment of unwanted bias throughout an AI system life cycle 17
General 17
Inception 17
General 17
External requirements 18
Internal requirements 19
Trans-disciplinary experts 19
Identification of stakeholders 19
Selection and documentation of data sources 20
External change 20
Acceptance criteria 21
Design and development 21
General 21
Data representation and labelling 21
Training and tuning 22
Adversarial methods to mitigate bias 23
Unwanted bias in rule-based systems 24
Verification and validation 24
General 24
Static analysis of training data and data preparation 25
Sample checks of labels 25
Internal validity testing 25
External validity testing 25
User testing 26
Exploratory testing 26
Deployment 26
General 26
Continuous monitoring and validation 26
Transparency tools 27
Annex A (informative) Examples of bias 28
Annex B (informative) Related open source tools 31
Annex C (informative) ISO 26000 – Mapping example 32
Bibliography 36
ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies (ISO member bodies). The work of preparing International Standards is normally carried out through ISO technical committees. Each member body interested in a subject for which a technical committee has been established has the right to be represented on that committee. International organizations, governmental and non-governmental, in liaison with ISO, also take part in the work. ISO collaborates closely with the International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization.
The procedures used to develop this document and those intended for its further maintenance are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types of ISO documents should be noted. This document was drafted in accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives).
Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO shall not be held responsible for identifying any or all such patent rights. Details of any patent rights identified during the development of the document will be in the Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents).
Any trade name used in this document is information given for the convenience of users and does not constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions related to conformity assessment, as well as information about ISO's adherence to the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT), see www.iso.org/iso/foreword.html.
This document was prepared by Technical Committee ISO/IEC JTC 1 Information technology, Subcommittee SC 42, Artificial intelligence.
Any feedback or questions on this document should be directed to the user’s national standards body. A complete listing of these bodies can be found at www.iso.org/members.html.
Bias in artificial intelligence (AI) systems can manifest in different ways. AI systems that learn patterns from data can reflect existing societal bias against groups. While some bias is necessary to address the AI system objectives (i.e. desired bias), there can be bias that is not intended in the objectives and thus represents unwanted bias in the AI system.
Bias in AI systems can be introduced as a result of structural deficiencies in system design, arise from human cognitive bias held by stakeholders or be inherent in the datasets used to train models. That means that AI systems can perpetuate or augment existing bias or create new bias.
Developing AI systems whose outcomes are free of unwanted bias is a challenging goal. The behaviour of AI systems is complex and can be difficult to understand, but treatment of unwanted bias is nonetheless possible. Many activities in the development and deployment of AI systems present opportunities to identify and treat unwanted bias, enabling stakeholders to benefit from AI systems according to their objectives.
Bias in AI systems is an active area of research. This document articulates current best practices to detect and treat bias in AI systems or in AI-aided decision-making, regardless of source. The document covers topics such as:
an overview of bias (5.2) and fairness (5.3);
potential sources of unwanted bias and terms to specify the nature of potential bias (Clause 6);
assessing bias and fairness (Clause 7) through metrics;
addressing unwanted bias through treatment strategies (Clause 8).
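One of the fairness metrics covered in Clause 7, demographic parity, compares the rate of favourable outcomes across groups; a large gap can signal unwanted bias. The following minimal sketch illustrates the idea; the function, predictions and group labels are hypothetical and not taken from this document:

```python
# Demographic parity: compare the rate of positive (favourable) predictions
# between two groups. The data below is entirely illustrative.
def positive_rate(predictions, groups, which):
    selected = [p for p, g in zip(predictions, groups) if g == which]
    return sum(selected) / len(selected)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                    # 1 = favourable decision
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]    # protected attribute
gap = abs(positive_rate(preds, groups, "a") - positive_rate(preds, groups, "b"))
# rate(a) = 3/4, rate(b) = 1/4, so the demographic parity gap is 0.5
```

In practice the acceptable size of such a gap, and whether demographic parity is the right criterion at all, depends on the context and stated objectives of the AI system.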
1 Scope
This document addresses bias in relation to AI systems, especially with regards to AI-aided decision-making. Measurement techniques and methods for assessing bias are described, with the aim to address and treat bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but not limited to data collection, training, continual learning, design, testing, evaluation and use.
2 Normative references
ISO/IEC 22989 1), Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
ISO/IEC 23053 2), Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
3 Terms and definitions
For the purposes of this document, the terms and definitions given in ISO/IEC 22989 and ISO/IEC 23053 and the following apply.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
ISO Online browsing platform: available at https://www.iso.org/obp
IEC Electropedia: available at https://www.electropedia.org/
3.1 Artificial intelligence
3.1.1
maximum likelihood estimator
estimator assigning the value of the parameter where the likelihood function attains or approaches its highest value
Note 1 to entry: Maximum likelihood estimation is a well-established approach for obtaining parameter estimates where a distribution has been specified (for example, normal, gamma, Weibull). These estimators have desirable statistical properties (for example, invariance under monotone transformation) and in many situations provide the estimation method of choice. In cases in which the maximum likelihood estimator is biased, a simple bias correction is sometimes applied.
[SOURCE: ISO 3534-1:2006, 1.35]
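As the note observes, a maximum likelihood estimator can itself be biased; the classic case is the MLE of the variance of a normal distribution, which divides by n rather than n − 1. A minimal illustration (the function and sample values here are hypothetical, not part of the cited source):

```python
# MLE for a normal distribution: the MLE of the mean is the sample mean,
# and the MLE of the variance divides by n, which is biased; multiplying
# by n / (n - 1) is the usual simple bias correction (Bessel's correction).
def normal_mle(samples):
    n = len(samples)
    mean = sum(samples) / n
    var_mle = sum((x - mean) ** 2 for x in samples) / n   # biased estimator
    var_corrected = var_mle * n / (n - 1)                 # bias-corrected
    return mean, var_mle, var_corrected

mean, var_mle, var_corr = normal_mle([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
# mean = 5.0, var_mle = 4.0; the corrected estimate is 32/7 ≈ 4.571
```

This is a statistical bias in the estimator itself, distinct from the societal and cognitive biases this document also covers; the standard's definitions keep these senses apart.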
3.1.2
rule-based system
knowledge-based system that draws inferences by applying a set of if-then rules to a set of facts following given procedures
[SOURCE: ISO/IEC 2382:2015, 2123875]
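The definition above can be illustrated with a short sketch of forward chaining, i.e. repeatedly applying if-then rules to a set of facts until no new facts can be inferred. The rules and facts here are hypothetical, for illustration only:

```python
# A minimal forward-chaining rule-based system: each rule is a pair
# (conditions, conclusion); a rule fires when all its conditions are
# already facts, adding its conclusion as a new fact.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["has_feathers"], "is_bird"),
    (["is_bird", "can_fly"], "can_migrate"),
]
inferred = forward_chain({"has_feathers", "can_fly"}, rules)
# inferred now also contains "is_bird" and "can_migrate"
```

Because the rules are written by people, such systems can encode the rule authors' assumptions directly, which is why the document treats rule-based system design as a potential source of unwanted bias (6.3).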
1) Under preparation. Stage at the time of publication: ISO/DIS 22989:2021.
2) Under preparation. Stage at the time of publication: ISO/DIS 23053:2021.