Bias – A lurking danger that can convert algorithmic systems into discriminatory entities: A framework for bias identification and mitigation
Gasser, Thea (2019)
All rights reserved. This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-201905027218
Abstract
This thesis examines the existence of bias in algorithmic systems and presents it as a cause of the unfair and discriminatory decisions made through the use of such systems. An extensive literature review was conducted to survey current discussions on this issue. Content and thematic analysis was applied to over 100 journal articles, books and websites to bring together proposals for how bias can be identified and reduced. The results of the analysis were further developed into concrete measures for project teams building Artificial Intelligence systems.

The findings demonstrate that humankind aims to map Human Intelligence onto Artificial Intelligence, but that no system has yet reached such intelligence, owing to the lack of machine sentience and self-awareness. Human beings therefore retain considerable influence over the design of these systems, and cognitive bias is very likely to be reflected in them. Awareness of bias must increasingly be raised in project teams, and appropriate measures must be applied. The thesis also illustrates that standardization work is in progress and that AI responsibility, AI safety and AI fairness will have high priority in the future. Researchers and managers have recognized the importance of bias in algorithmic systems, and the demand for fair AI systems is high. The outcome of this thesis is a framework that contributes to AI safety: it serves as a guideline and proposes measures for identifying and mitigating bias in algorithmic systems, and it can be adapted and extended to a specific project context. Future collaboration and regulation among businesses, institutions and society will be required to address this issue successfully.