BS PD ISO/IEC TR 24027:2021 Information technology. Artificial intelligence (AI). Bias in AI systems and AI aided decision making

standard by BSI Group, 11/19/2021


PD ISO/IEC TR 24027:2021


Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making



PD ISO/IEC TR 24027:2021 PUBLISHED DOCUMENT



National foreword


This Published Document is the UK implementation of ISO/IEC TR 24027:2021.

The UK participation in its preparation was entrusted to Technical Committee ART/1, Artificial Intelligence.

A list of organizations represented on this committee can be obtained on request to its committee manager.

Contractual and legal considerations

This publication has been prepared in good faith, however no representation, warranty, assurance or undertaking (express or implied) is or will be made, and no responsibility or liability is or will be accepted by BSI in relation to the adequacy, accuracy, completeness or reasonableness of this publication. All and any such responsibility and liability is expressly disclaimed to the full extent permitted by the law.

This publication is provided as is, and is to be used at the recipient’s own risk.

The recipient is advised to consider seeking professional guidance with respect to its use of this publication.

This publication is not intended to constitute a contract. Users are responsible for its correct application.

This publication is not to be regarded as a British Standard.

© The British Standards Institution 2021 Published by BSI Standards Limited 2021

ISBN 978 0 539 16390 2

ICS 35.020

Compliance with a Published Document cannot confer immunity from legal obligations.

This Published Document was published under the authority of the Standards Policy and Strategy Committee on 30 November 2021.


Amendments/corrigenda issued since publication

Date Text affected


TECHNICAL REPORT

ISO/IEC TR 24027

First edition
2021-11



Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making

Technologie de l'information — Intelligence artificielle (IA) — Tendance dans les systèmes de l'IA et dans la prise de décision assistée par l'IA



Reference number ISO/IEC TR 24027:2021(E)


© ISO/IEC 2021

Contents Page


Foreword v

Introduction vi

1 Scope 1
2 Normative references 1
3 Terms and definitions 1
3.1 Artificial intelligence 1
3.2 Bias 2
4 Abbreviations 3
5 Overview of bias and fairness 3
5.1 General 3
5.2 Overview of bias 3
5.3 Overview of fairness 5
6 Sources of unwanted bias in AI systems 6
6.1 General 6
6.2 Human cognitive biases 7
6.2.1 General 7
6.2.2 Automation bias 7
6.2.3 Group attribution bias 8
6.2.4 Implicit bias 8
6.2.5 Confirmation bias 8
6.2.6 In-group bias 8
6.2.7 Out-group homogeneity bias 8
6.2.8 Societal bias 9
6.2.9 Rule-based system design 9
6.2.10 Requirements bias 10
6.3 Data bias 10
6.3.1 General 10
6.3.2 Statistical bias 10
6.3.3 Data labels and labelling process 11
6.3.4 Non-representative sampling 11
6.3.5 Missing features and labels 11
6.3.6 Data processing 12
6.3.7 Simpson's paradox 12
6.3.8 Data aggregation 12
6.3.9 Distributed training 12
6.3.10 Other sources of data bias 12
6.4 Bias introduced by engineering decisions 12
6.4.1 General 12
6.4.2 Feature engineering 12
6.4.3 Algorithm selection 13
6.4.4 Hyperparameter tuning 13
6.4.5 Informativeness 14
6.4.6 Model bias 14
6.4.7 Model interaction 14
7 Assessment of bias and fairness in AI systems 14
7.1 General 14
7.2 Confusion matrix 15
7.3 Equalized odds 16
7.4 Equality of opportunity 16
7.5 Demographic parity 17
7.6 Predictive equality 17
7.7 Other metrics 17
8 Treatment of unwanted bias throughout an AI system life cycle 17
8.1 General 17
8.2 Inception 17
8.2.1 General 17
8.2.2 External requirements 18
8.2.3 Internal requirements 19
8.2.4 Trans-disciplinary experts 19
8.2.5 Identification of stakeholders 19
8.2.6 Selection and documentation of data sources 20
8.2.7 External change 20
8.2.8 Acceptance criteria 21
8.3 Design and development 21
8.3.1 General 21
8.3.2 Data representation and labelling 21
8.3.3 Training and tuning 22
8.3.4 Adversarial methods to mitigate bias 23
8.3.5 Unwanted bias in rule-based systems 24
8.4 Verification and validation 24
8.4.1 General 24
8.4.2 Static analysis of training data and data preparation 25
8.4.3 Sample checks of labels 25
8.4.4 Internal validity testing 25
8.4.5 External validity testing 25
8.4.6 User testing 26
8.4.7 Exploratory testing 26
8.5 Deployment 26
8.5.1 General 26
8.5.2 Continuous monitoring and validation 26
8.5.3 Transparency tools 27

Annex A (informative) Examples of bias 28

Annex B (informative) Related open source tools 31

Annex C (informative) ISO 26000 — Mapping example 32

Bibliography 36



Foreword


ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies (ISO member bodies). The work of preparing International Standards is normally carried out through ISO technical committees. Each member body interested in a subject for which a technical committee has been established has the right to be represented on that committee. International organizations, governmental and non-governmental, in liaison with ISO, also take part in the work. ISO collaborates closely with the International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization.

The procedures used to develop this document and those intended for its further maintenance are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types of ISO documents should be noted. This document was drafted in accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives).

Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO shall not be held responsible for identifying any or all such patent rights. Details of any patent rights identified during the development of the document will be in the Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents).

Any trade name used in this document is information given for the convenience of users and does not constitute an endorsement.

For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions related to conformity assessment, as well as information about ISO's adherence to the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT), see www.iso.org/iso/foreword.html.

This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology, Subcommittee SC 42, Artificial intelligence.

Any feedback or questions on this document should be directed to the user’s national standards body. A complete listing of these bodies can be found at www.iso.org/members.html.




Introduction


Bias in artificial intelligence (AI) systems can manifest in different ways. AI systems that learn patterns from data can potentially reflect existing societal bias against groups. While some bias is necessary to address the AI system objectives (i.e. desired bias), there can be bias that is not intended by the objectives and thus represents unwanted bias in the AI system.

Bias in AI systems can be introduced as a result of structural deficiencies in system design, arise from human cognitive bias held by stakeholders or be inherent in the datasets used to train models. That means that AI systems can perpetuate or augment existing bias or create new bias.

Developing AI systems with outcomes free of unwanted bias is a challenging goal. The behaviour of an AI system is complex and can be difficult to understand, but treatment of unwanted bias is nonetheless possible. Many activities in the development and deployment of AI systems present opportunities to identify and treat unwanted bias, enabling stakeholders to benefit from AI systems according to their objectives.

Bias in AI systems is an active area of research. This document articulates current best practices to detect and treat bias in AI systems or in AI-aided decision-making, regardless of source. The document covers topics such as:

  • an overview of bias (5.2) and fairness (5.3);

  • potential sources of unwanted bias and terms to specify the nature of potential bias (Clause 6);

  • assessing bias and fairness (Clause 7) through metrics;

  • addressing unwanted bias through treatment strategies (Clause 8).
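For illustration only (not part of the ISO text): the metric-based assessment surveyed in Clause 7 includes group-comparison measures such as demographic parity, which compares the rate of positive predictions across groups. A minimal Python sketch, with invented example data:

```python
def demographic_parity_difference(y_pred_a, y_pred_b):
    """Difference in positive-prediction rates between two groups.

    A value of 0 means the classifier satisfies demographic parity;
    larger absolute values indicate a larger disparity.
    """
    rate_a = sum(y_pred_a) / len(y_pred_a)
    rate_b = sum(y_pred_b) / len(y_pred_b)
    return rate_a - rate_b

# Hypothetical binary predictions (1 = positive outcome) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 1]  # 6/8 = 0.75 positive rate
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 = 0.25 positive rate

print(demographic_parity_difference(group_a, group_b))  # 0.5
```

Whether such a disparity constitutes unwanted bias depends on the system's objectives, which is exactly the desired-versus-unwanted distinction drawn above.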






Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making


1 Scope

This document addresses bias in relation to AI systems, especially with regards to AI-aided decision-making. Measurement techniques and methods for assessing bias are described, with the aim to address and treat bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but not limited to data collection, training, continual learning, design, testing, evaluation and use.


2 Normative references

ISO/IEC 22989 1), Information technology — Artificial intelligence — Artificial intelligence concepts and terminology

ISO/IEC 23053 2), Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)


    3 Terms and definitions

For the purposes of this document, the terms and definitions given in ISO/IEC 22989 and ISO/IEC 23053 and the following apply.

    ISO and IEC maintain terminological databases for use in standardization at the following addresses:

  • ISO Online browsing platform: available at https://www.iso.org/obp

  • IEC Electropedia: available at https://www.electropedia.org/


3.1 Artificial intelligence

3.1.1

maximum likelihood estimator

estimator assigning the value of the parameter where the likelihood function attains or approaches its highest value

Note 1 to entry: Maximum likelihood estimation is a well-established approach for obtaining parameter estimates where a distribution has been specified [for example, normal, gamma, Weibull and so forth]. These estimators have desirable statistical properties (for example, invariance under monotone transformation) and in many situations provide the estimation method of choice. In cases in which the maximum likelihood estimator is biased, a simple bias correction sometimes takes place.

[SOURCE: ISO 3534-1:2006, 1.35]
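As a worked illustration of the bias correction mentioned in the note (not part of the ISO text): the maximum likelihood estimator of a normal distribution's variance divides by n and is biased; the usual correction (Bessel's correction) divides by n − 1 instead. A Python sketch with invented data:

```python
def variance_mle(xs):
    """Maximum likelihood estimate of variance: divides by n (biased)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

def variance_unbiased(xs):
    """Bias-corrected estimate: rescale by n / (n - 1)."""
    n = len(xs)
    return variance_mle(xs) * n / (n - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(variance_mle(data))       # 4.0
print(variance_unbiased(data))  # ~4.571
```

This is the statistical sense of "bias" (3.2); the document distinguishes it from the societal and cognitive senses discussed in Clauses 5 and 6.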

3.1.2

rule-based system
knowledge-based system that draws inferences by applying a set of if-then rules to a set of facts following given procedures

[SOURCE: ISO/IEC 2382:2015, 2123875]
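A toy illustration of the definition above (not part of the ISO text): a rule-based system repeatedly applies if-then rules to a set of facts until no new inference can be drawn. A minimal forward-chaining sketch in Python, with hypothetical rules and fact names:

```python
def forward_chain(facts, rules):
    """Apply if-then rules to facts until a fixed point is reached.

    facts: set of known facts.
    rules: list of (premises, conclusion) pairs, read as
           "if all premises hold, then conclusion holds".
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)  # new inference drawn
                changed = True
    return facts

# Hypothetical rules for a loan-screening example.
rules = [
    ({"income_verified", "no_defaults"}, "low_risk"),
    ({"low_risk"}, "approve"),
]
result = forward_chain({"income_verified", "no_defaults"}, rules)
print(sorted(result))
```

Such systems contain no learned parameters, yet, as 6.2.9 notes, bias can still enter through the design of the rules themselves.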



1) Under preparation. Stage at the time of publication: ISO/DIS 22989:2021.

2) Under preparation. Stage at the time of publication: ISO/DIS 23053:2021.

